This is the pre-publisher copy delivered to Brill. The official publication should be consulted for citation: Todd R. Hanneken, “New Technology for Imaging Unreadable Manuscripts and Other Artifacts: Integrated Spectral Reflectance Transformation Imaging (Spectral RTI).” In Ancient Worlds in a Digital Culture. Edited by Claire Clivaz, Paul Dilley, and David Hamidović. Digital Biblical Studies 1. Leiden: Brill (2016), 180–195.
In the twenty-first century advances in digital technology are propelling the study of ancient literature and scribal culture. This essay describes an integrated set of advances in image capture, processing, and dissemination that improves upon first-hand experience and harnesses the power of the web to connect people and data. Illegible manuscripts are an area of particular interest among all the cultural heritage artifacts that benefit from these advances. Spectral RTI makes it possible to distinguish letters and other evidence of use based on traces as subtle as the corrosion of parchment where ink had once been and the spectral signature of ink stains that cannot be distinguished by the human eye. WebGL and open standards are making it possible to link to interactive enhanced images from anywhere on the web, and to link from the images to annotations and tools for analysis.
Twenty-first-century developments in the study of ancient literature through manuscript discovery can be contrasted with a prime example of twentieth-century manuscript discovery, the Dead Sea Scrolls. The Dead Sea Scrolls made a major impact on scholarly understanding of the origins of Christianity and Rabbinic Judaism primarily through the recovery (transcription) of text and comparison with known texts such as the Masoretic text, Septuagint, New Testament, and Pseudepigrapha. In the twenty-first century the same desire to recover partially known or unknown texts from antiquity persists, but is joined by growing interest in material philology and manuscript culture. Manuscripts are increasingly appreciated not simply as text containers, but as independently valuable artifacts which demonstrate scribal practices, including interesting mistakes, corrections, marginal annotation, decoration, and evidence of use. The Dead Sea Scrolls were excavated by archaeologists following chance discovery at a remote site. In the twenty-first century the “caves” to be excavated are collections of palimpsests and other unreadable documents already in libraries and museums. Once excavated, the Dead Sea Scrolls were studied by a small number of people and, decades later, published in expensive volumes with transcriptions and basic photographic plates. In the twenty-first century the benefits of a loose hold on intellectual property are increasingly recognized, and digital technology makes it possible for a digitized artifact to be copied and transmitted with no degradation and little cost. The Dead Sea Scrolls were edited and then published, but in the twenty-first century a digitized artifact can be published and then edited by a community of scholars using web-based tools for analysis, annotation, and collaboration.
The following sections will describe the significance of a related set of advances in the digital humanities. First, Reflectance Transformation Imaging (RTI) makes it possible to capture and visualize fine texture in artifacts such as inscriptions and, at high resolution, even the texture of parchment. Second, spectral imaging dramatically improves upon the spectral range and resolution of the human eye in seeing and distinguishing colors. Third, Spectral RTI combines the advantages of the first two technologies, producing a result that is greater than the sum of its parts. Finally, open web standards open new doors for accessing, viewing, and annotating Spectral RTI images.
Reflectance Transformation Imaging captures and visualizes the texture of artifacts. RTI images allow the user to move a virtual light source to any angle to bring out the texture through highlights and shadows. The benefits of RTI are most apparent when imaging artifacts such as cuneiform tablets, inscriptions, and coins in which texture is the primary conveyor of meaning. A photograph of a cuneiform tablet taken with diffuse lighting is unreadable because we rely on highlights and shadows to see the texture. Sometimes a single angle of illumination can make an entire tablet readable, but often different angles are most helpful for different portions. RTI calculates the texture of each pixel in the frame, so at high resolution it can be useful even for artifacts such as manuscripts in which texture is not the primary intended conveyor of meaning. At very high resolutions it is possible to see the thickness of ink rising over the parchment or the corrosion of the parchment where acidic ink had once been, even if the ink itself is completely erased. This ability opens a significant new door to recovering text from palimpsests. When manuscript specialists assert that a photograph cannot substitute for first-hand inspection, they typically mean a variety of observations that come down to texture and interactivity: the feel of the parchment, distinguishing an ink mark from a glob of dirt or flaw in the parchment, making out a letter by moving the folio with respect to the light, identifying rough, smooth, and shiny materials, and so forth. RTI captures fine texture and renders it interactively in ways that can surpass first-hand inspection.
Three key differences distinguish RTI from raking-light photography. First, with raking-light photography the photographer decides which angle of illumination is helpful. With RTI the user can experiment and decide which angles are helpful. Second, a raking-light image is static, but an RTI image can be relit fluidly. This capitalizes on the human perception of motion and the ability to interpret an object more effectively from an image that changes with a moving virtual light source. Third, raking-light photography captures appearance at a particular angle of illumination, but RTI calculates mathematically the texture of each pixel. This data can be exploited to create enhancements. For example, the “visualize normals” function in the RTI viewer shows none of the actual color of the object but rather represents each angle of texture as its own color. This provides a quick overview of the texture of an object. Specular enhancement maintains some of the actual color appearance of the object but makes the object appear shinier, as if coated in silver. The visual enhancements, along with the interactivity of the virtual lights, create a process of discovery that has great potential for scholarship and for teaching; students can interact and experiment with a digital facsimile of an object that they otherwise might never see except in a still photograph or fixed in a museum case.
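The per-pixel calculation can be sketched concretely. The following is a hedged illustration of the Polynomial Texture Map (PTM) model used by early RTI fitters: each pixel’s luminance is fit, by least squares, as a biquadratic polynomial of the light direction projected onto the image plane, after which the pixel can be “relit” from any virtual angle. The function names and the synthetic pixel are illustrative, not the actual RTI software.

```python
import numpy as np

def ptm_basis(lu, lv):
    """Six-term PTM basis for a light direction projected to (lu, lv)."""
    return np.stack([lu**2, lv**2, lu*lv, lu, lv, np.ones_like(lu)], axis=-1)

def fit_ptm(light_dirs, luminances):
    """Least-squares fit of PTM coefficients for one pixel.
    light_dirs: (N, 2) array of (lu, lv); luminances: (N,) observed values."""
    A = ptm_basis(light_dirs[:, 0], light_dirs[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, luminances, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted model at a new (virtual) light direction."""
    return ptm_basis(np.asarray(lu), np.asarray(lv)) @ coeffs

# Synthetic pixel: luminance peaks when light comes from (0.5, 0.2),
# as if a facet of the surface tilted toward that direction.
rng = np.random.default_rng(0)
dirs = rng.uniform(-1, 1, size=(40, 2))               # 40 light positions
true = 1.0 - (dirs[:, 0] - 0.5)**2 - (dirs[:, 1] - 0.2)**2
c = fit_ptm(dirs, true)
print(round(float(relight(c, 0.5, 0.2)), 3))          # maximal at the peak
```

Repeating this fit for every pixel is what turns thirty-five to seventy captures into an interactively relightable image.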
In the past the technique of capturing RTI images worked best on small flat objects such as coins and cuneiform tablets. Recent developments are expanding three boundaries of RTI. First, an RTI capture sequence relies on the camera and object remaining perfectly still. As a result an RTI image shows only one view of the object (at a time), while the lighting changes. It is possible to switch between RTI images of recto and verso or other views of an object, but the capture sequence must be performed separately for each view. Simply put, RTI provides a texture map for a 2D image (some have called it 2.5D) but it is not 3D. A limited number of 2D perspectives works well for flat or mostly flat objects such as coins, manuscript pages, and bas reliefs, but not for statues or monumental architecture. The desire for a truly three-dimensional model often, but not necessarily, correlates with a desire to digitize a larger object. While RTI alone is not effective at creating large-scale three-dimensional models (a limited application is possible on some simple objects), it can be used in conjunction with other technologies, such as laser-scanning and photogrammetry, which provide the large-scale model. In these cases RTI could be used to create high-detail texture maps that can be mapped onto surfaces in the three-dimensional model. The basic premise of a high-resolution texture map that responds differently to changing virtual angles of light and moves in three-dimensional space is well-established, and is perhaps most commonly encountered in video games. The challenges are in capture and implementation, and would increase with the number of discrete surfaces to be texture mapped.
A second feature of the capture method has also limited the size of objects to be captured, but the limitation can be overcome. A sequence of image captures for RTI requires a physical or virtual dome around the object. Thirty-five to seventy images are captured with the only variable being the position of the light source in the dome. That sequence of images is then processed by a fitter that calculates the surface texture of each pixel. The limitation here is that the fitter assumes that the angle of illumination is the same for every point in the field of view; however, this assumption is flawed in as much as a point of light on one side of the object strikes its near side more directly than its far side. The simplest solution used in the past has been to move the light further away to reduce the variation from this effect. Consequently the radius of the physical or virtual dome must be three to four times greater than the diameter of the object to be studied. Again, this works well on small objects but poses practical problems for large objects in enclosed spaces. One approach to correct for the variability of angle of light incidence would be to use a light source that shines a beam in only one direction rather than a point shining in all directions. These light sources are called “collimated” because all the light travels in parallel, like the columns of a colonnade. However, the beam would require a diameter at least equal to the diameter of the object to be imaged, posing a variation on the same problem. The more likely solution on the horizon was anticipated by the original developers of RTI but has not yet been implemented.[1] This would be a software solution that corrects for the variability of angle of light incidence using geometry.
That is, if one could measure the angle of illumination striking two points on a plane and knew the exact relative position of the camera, one could triangulate the angle of illumination for each individual pixel.[2] This approach would require some one-time work in software development, but subsequently would add negligibly to (or likely reduce overall) the labor of capture and processing. Although the will and capital have not yet been found, the potential is strong.
[1] Tom Malzbender, Dan Gelb, and Hans Wolters, “Polynomial Texture Maps,” Computer Graphics, Proceedings of ACM SIGGRAPH (2001), http://www.hpl.hp.com/research/ptm/papers/ptm.pdf. Carla Schroer and Judy Bogart, “Reflectance Transformation Imaging: Guide to Highlight Image Processing,” Cultural Heritage Imaging (2011), http://culturalheritageimaging.org/What_We_Offer/Downloads/rtibuilder/RTI_hlt_Processing_Guide_v14_beta.pdf, p. 15.
[2] For readers familiar with the method, this explains why it is considered standard to place two shiny spheres in the field of view, even though currently only one can be used.
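The geometric correction just described reduces, once the light’s 3D position is known, to simple vector arithmetic per pixel. Below is a hedged sketch assuming a flat object and a known point-light position (the kind of data that highlights on two reflective spheres could supply); names and geometry are illustrative only.

```python
import numpy as np

def per_pixel_light_dirs(light_pos, xs, ys):
    """Incidence direction at each (x, y) point of a flat object at z=0."""
    X, Y = np.meshgrid(xs, ys)
    P = np.stack([X, Y, np.zeros_like(X)], axis=-1)    # surface points
    V = light_pos - P                                  # vector: pixel -> light
    return V / np.linalg.norm(V, axis=-1, keepdims=True)

light = np.array([0.0, 0.0, 30.0])     # point light 30 cm above the center
xs = np.linspace(-10, 10, 3)           # a 20 cm wide object, 3 sample points
dirs = per_pixel_light_dirs(light, xs, xs)

center = dirs[1, 1]                    # sees the light straight on
corner = dirs[0, 0]                    # sees it at a noticeably oblique angle
print(center)
print(np.degrees(np.arccos(corner @ np.array([0.0, 0.0, 1.0]))))  # ~25 degrees
```

The spread between center and corner is exactly the error a single-angle fitter ignores; correcting it per pixel removes the need for a dome several times larger than the object.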
A third feature of the capture method has similarly limited the size of objects to be captured, but this limitation is already being overcome. The spatial resolution of the camera (typically measured in megapixels) limits the size and spatial resolution of RTI images. If one images a coin, a portion of a manuscript page, or a small scroll fragment it is possible to see very fine detail, including the thickness of the ink, the corrosion due to ink, the hair and flesh features of parchment, and the scraping marks of a palimpsest. However, to image a full page of a medieval manuscript or a larger inscription would quickly reduce the spatial resolution. To give some perspective, a “full HD” screen is 2.1 megapixels, and “4K” screens are 8.3 megapixels. To study cultural heritage objects, one would want to be able to zoom in on detail. Fortunately, the spatial resolution of digital cameras is moving ever upward. At the time of writing in 2015, Canon has released a 50 megapixel camera (EOS 5DS), sufficient to resolve a frame of 14.48×9.65 inches (36.78×24.51 centimeters) at 600 dots per inch (dpi).[3] One solution for high-resolution imaging of larger objects is simply to wait for camera resolution and availability to increase. Another solution would be to use “stitching” methods to combine multiple images into a single large image. This is more complicated with RTI images and would likely leave some errors and artifacts. The highest-quality solution is to move beyond the limitations of mass-market photography (e.g., Canon, Nikon) to more specialized cameras (e.g., MegaVision, Phase One). On the one hand, it is certainly an advantage of RTI that it can be performed with inexpensive cameras. On the other hand, for projects that justify the extra expense of renting or purchasing a better camera it is possible to achieve higher spatial resolution at much higher quality. The quality difference results from several factors.
First, mass-market cameras are made to be highly portable and compact, and so use a small sensor (36×24 millimeters). Doubling the size of the sensor improves quality greatly. Mass-market cameras also use distorting filters such as a Bayer filter that divides light into red, green, and blue sub-pixels. These distortions are compromises required to capture color in an action sequence, but are not necessary in the controlled environment (motionless object and camera) required by RTI. Spectral imaging, described in the next section, already uses the more advanced cameras; integrated Spectral RTI, described in the following section, has brought these advantages to RTI. Recent and foreseeable advances promise to extend the power of RTI already demonstrated on small flat objects to larger and more dimensional cultural heritage artifacts.
[3] Previously, 22 megapixels was considered high-resolution, allowing 600 dpi resolution of a frame as large as 8.64×5.76 inches (21.95×14.63 centimeters).
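The resolution figures above can be checked with a line of arithmetic: a sensor of W×H pixels covers a frame of (W/dpi)×(H/dpi) inches at a chosen dpi. The 8688×5792 pixel dimensions used here are those of the Canon EOS 5DS mentioned in the text.

```python
def frame_size_inches(width_px, height_px, dpi):
    """Largest frame a sensor can cover at the given resolution (dpi)."""
    return width_px / dpi, height_px / dpi

# Canon EOS 5DS sensor (8688 x 5792 pixels, ~50 megapixels) at 600 dpi
w, h = frame_size_inches(8688, 5792, 600)
print(round(w, 2), round(h, 2))   # 14.48 9.65, matching the text
```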
Spectral imaging greatly improves upon the ability of the human eye to perceive color. This can take the form of creating highly accurate and standard color images (without subjective human color matching). It can also take the form of enhancements that allow us to see the invisible and distinguish the indistinguishable. The technology was developed especially to study damaged manuscript fragments such as the Dead Sea Scrolls and deliberately erased manuscripts (palimpsests).[4] Palimpsests frequently preserve ancient or marginal literature because reusing parchment was more cost-effective than creating new parchment. However, it can be extremely difficult to read the erased text partly because of the diligence of the scribe charged with scraping the parchment clear of visible ink traces (visible to the scribe, that is). Reading the erased text is further challenged by the new text and centuries of decay and poor conservation practices (including the application of chemical tinctures designed shortsightedly to enhance readability). The effect is often that a palimpsest appears as a jumble of browns. Whether one is an interior decorator or a scholar of ancient literature, browns push the limit of our ability to describe and distinguish shades because the three color receptors of the human eye are all being triggered. In the study of paintings one encounters deliberate color matching for purposes of restoration or forgery. Spectral imaging is able to distinguish materials of different spectral signatures—mainly materials of different chemical composition—that may be indistinguishable to the eye. Consequently it becomes easy to distinguish paints and inks of different materials and time periods. This is also especially useful when trying to identify a mark as a flaw in the parchment, the ink of the scribe, or a stain from some other cause.
[4] See especially Reviel Netz et al., The Archimedes Palimpsest, 2 vols., The Archimedes Palimpsest Publications (Cambridge: Cambridge University Press, 2011).
Spectral imaging works by capturing and processing a range and resolution of color far surpassing the human eye. The color range of the human eye extends from blue to red, excluding ultraviolet on one end and infrared on the other. It has long been known that one or both of these bands can be helpful for showing “invisible” contrasts, such as ink traces that could not be seen by an erasing scribe, or similarly to a conservator or forger of paintings. Particularly in the days before digital imaging, a single monochrome image taken through a filter that passes only ultraviolet or infrared light could be very useful. Sometimes such an image can still be useful, and some continue to use the term “multi-spectral” to refer to an image showing as little as one wavelength on the spectrum if the wavelength is outside the visible range. However, in the past decade spectral imaging has improved greatly by extending not only the range of color perception but also the resolution. The human eye resolves three colors (someone is called “colorblind” if able to resolve only two). The additional colors we see are all combinations of red, green, and blue. We see “stripes” of color in a rainbow for that reason. Those stripes can be thought of as the limit of color resolution much as “pixelation” shows the limit of spatial resolution. Just as color perception increases exponentially from a person with two-band resolution (colorblind) to three, so too resolving sixteen bands greatly increases the ability to distinguish, for example, reds. When distinguishing browns it is possible to distinguish the spectral signature of one material from another based on patterns across all sixteen bands.
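The point about browns can be made concrete with invented numbers: two spectra whose coarse three-band averages are identical, so a trichromatic eye would register the same brown, while the sixteen-band signatures differ plainly.

```python
import numpy as np

# Illustrative, invented reflectance values across sixteen narrow bands.
ink_a = np.array([0.30, 0.32, 0.28, 0.35, 0.25,        # short wavelengths
                  0.40, 0.38, 0.42, 0.36, 0.44,        # middle wavelengths
                  0.50, 0.48, 0.52, 0.46, 0.54, 0.50])  # long wavelengths
# ink_b reflects the same total within each coarse band, but the energy
# is distributed differently among the narrow bands inside it.
ink_b = np.array([0.25, 0.35, 0.30, 0.28, 0.32,
                  0.44, 0.36, 0.40, 0.42, 0.38,
                  0.54, 0.46, 0.50, 0.52, 0.48, 0.50])

def coarse(spectrum):
    """Collapse 16 bands into 3, roughly as a trichromatic eye does."""
    return np.array([spectrum[:5].mean(), spectrum[5:10].mean(),
                     spectrum[10:].mean()])

print(np.allclose(coarse(ink_a), coarse(ink_b)))        # True: same "brown"
print(round(float(np.linalg.norm(ink_a - ink_b)), 2))   # 0.18: distinct signatures
```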
It may seem counter-intuitive that spectral imaging requires a monochrome camera. A monochrome camera sensor simply measures the intensity of light at any given pixel and is indifferent to the wavelength of that light. Color information is determined not by the camera sensor but by the environment. In the past, filters were placed on the camera to allow only a narrow band of wavelength to pass to the film or sensor. The problems were that the filter introduced some distortion and a significant loss of total light, such that the object had to be illuminated more brightly overall. This was a concern to conservators. Rather than bombarding the object with light and filtering out all but some of the light, the solution was to illuminate the object only with light of a certain wavelength. This became easier with advances in LED lighting.[5] Given a darkened room, one can illuminate an object with only a certain wavelength and generally conclude that light striking the sensor of the camera is reflected light of that wavelength.[6] An indirect benefit of this method is greatly improved spatial resolution. Because the camera is charged with measuring only intensity of luminance (while the lighting system manages color) it can do so much more accurately. As discussed above, mass-market cameras use distorting filters to try to capture three colors at once. Spectral imaging captures one color at a time in a sequence of sixteen captures. The process is automated, so requires only a little more time. As with RTI, it requires the camera and object to remain motionless throughout the sequence. It also requires a generally dark room. For these reasons spectral imaging is normally conducted in controlled indoor environments. Spectral imaging has been proven on major projects (perhaps most famously the Archimedes Palimpsest Project), but cost remains the greatest impediment to wider uptake. A complete imaging lab could cost $100,000. Renting is frequently the preferred option. 
Equipment could be rented for a one-week capture session for approximately $10,000. Projects are underway to reduce that cost, particularly for objects that can be transported to an established lab.
[5] Roger L. Easton, Jr., William A. Christens-Barry, and Keith T. Knox, “Ten Years of Lessons from Imaging of the Archimedes Palimpsest,” Eikonopoiia, Digital Imaging of Ancient Textual Heritage, Technical Challenges and Solutions (2010), http://www.cis.rit.edu/DocumentLibrary/admin/uploads/CIS000087.pdf, p. 19.
[6] The only exception is in the case of UV phosphorescence, in which case filters can be used to distinguish reflected from phosphoresced light.
Only part of the innovation in spectral imaging pertains to the camera and illumination systems. Capture technology alone would only inundate the scholar with a large amount of data in the form of a series of grayscale images differing in abstract ways. The next step in spectral imaging is to process the data into a reasonable number of meaningful images. The development of digital imaging technology opened a new realm of possibilities for advanced mathematical processing. Spectral image processing (along with RTI) relies on the principle of “registration,” meaning that any given pixel coordinates represent the same spot on the object throughout the sequence of images. This would have been difficult or impossible with analog photography or even a digitized set of analog negatives. An image that is “born” digital and fully registered makes it possible to perform advanced calculations in precise and standard ways.
Although there are many processing techniques and variations available, three provide a good summary of the potential: Accurate Color, PCA Pseudocolor, and Extended Spectrum. Accurate Color is the least sensational of the three but perhaps most important for conservation and dissemination of an image that closely adheres to reality. One need only search Google Images for a piece of fine art to see that color renderings can make a big difference. Digital imaging professionals have tools to help them reproduce color accurately, but the process inevitably relies on subjective evaluation of matching to correct for digitization sensors that are imperfectly analogous to the receptors of the human eye. Spectral imaging resolves more colors in the first place and lacks the distortion of Bayer filters that divide pixels into red, green, and blue sub-pixels (RGB, or RGGB, because of the uneven distribution of receptors in the eye).[7] For this reason Accurate Color renderings can be defined more precisely and consistently from calibrated color-checker targets. One can be certain of the accuracy of the color before it ever reaches a screen or printer. Future scholars, presumably with even more accurate screens and printers, will not have to correct for the limitations of the technology used today for color processing.
[7] Ken Boydston, “N-Shot Multi-Spectral Capture,” MegaVision, http://www.mega-vision.com/multispectral.html.
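The calibration step behind Accurate Color can be sketched as a least-squares problem: solve for a linear map from band values to reference color using the known patches of a color-checker target, then apply that map to every pixel. This toy version skips the standard color spaces (such as XYZ) and illuminant handling a real pipeline would include; all names and the simulated data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_patches = 16, 24
M_true = rng.uniform(0, 1, size=(3, n_bands))            # unknown "ground truth"
patches = rng.uniform(0, 1, size=(n_patches, n_bands))   # measured band values
reference = patches @ M_true.T                           # published patch colors

# Least squares: find M such that patches @ M ~= reference
M_fit, *_ = np.linalg.lstsq(patches, reference, rcond=None)

# Applying the fitted map to any new pixel reproduces the true color.
pixel = rng.uniform(0, 1, size=n_bands)
print(np.allclose(pixel @ M_fit, pixel @ M_true.T))      # True
```

Because the fit is anchored to measured targets rather than to a human matching colors by eye, the same map yields the same result on any screen or printer.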
The other extreme of spectral image processing, PCA Pseudocolor, makes no attempt to capture actual appearance; rather, it brings out invisible contrasts and renders them in a stark way that capitalizes on the color perception we do have.[8] Principal Component Analysis (PCA) is a statistical technique that eliminates redundancy in a large data set (as we have from the capture sequence). Essentially, this means it finds the greatest contrast that can be found across the color range, and the second greatest, and third, and so forth to a number equal to the size of the initial data set (number of input images). In the case of a palimpsest, one or more of these will correlate to the contrast between the traces of erased ink and the surrounding materials. In the case of a painting, one of these might be the contrast between paints that appear to match but have different material composition and therefore different spectral signatures. Any one of these monochrome derivatives may be interesting, but they can be made more interesting when two or three of them are mapped to the three color ranges we naturally distinguish. The resulting Pseudocolor bears no resemblance to reality but rather visualizes the greatest contrasts to be found in an image or region that otherwise may appear muted or homogeneous.
[8] Easton, Christens-Barry, and Knox, “Ten Years of Lessons,” pp. 16–17.
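PCA Pseudocolor can be sketched on a tiny synthetic “palimpsest”: stack the registered band images into a (pixels × bands) matrix, extract the top principal components, and map them to display channels. The sizes, values, and the SVD route here are illustrative; dedicated tools are used in practice, but the mathematics is the same.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w, n_bands = 8, 8, 16
cube = rng.normal(0.5, 0.02, size=(h, w, n_bands))  # bland "parchment"
cube[2:5, 2:5, 10:] -= 0.1        # faint ink visible only in six IR-end bands

X = cube.reshape(-1, n_bands)
Xc = X - X.mean(axis=0)           # center each band
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pseudo = (Xc @ Vt[:3].T).reshape(h, w, 3)   # top 3 contrasts -> R, G, B

# The first component isolates the ink region the eye could not see.
mask = np.zeros((h, w), dtype=bool)
mask[2:5, 2:5] = True
pc1 = np.abs(pseudo[:, :, 0])
print(pc1[mask].mean() > pc1[~mask].mean())   # True: ink stands out
```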
A third processing technique, Extended Spectrum, can be thought of as splitting the difference between Accurate Color and PCA Pseudocolor. Extended Spectrum uses PCA to find the greatest contrasts not across the entire sequence of sixteen or so images, but across the bottom third with the shortest wavelengths, the middle third, and the top third with the longest wavelengths. The greatest contrast in the short “blueish” range is mapped to the blue channel, the greatest contrast in the middle “greenish” range is mapped to the green channel, and the greatest contrast in the longest “reddish” range is mapped to the red channel.[9] This has three advantages. First, it utilizes PCA to discover contrasts between reds, greens, and blues that we could not distinguish with the eye. Second, it extends beyond the visual range and simulates within the visual range what we might see if we were able to see infrared and ultraviolet. For this reason a parchment page in a palimpsest may appear blue in an Extended Spectrum rendering if it reflects more ultraviolet (translated to blue) than the traces of erased ink. Similarly, the black microfiber sometimes used as a background in imaging projects is black in the visible range but does reflect infrared. Thus if an Extended Spectrum region of interest includes black microfiber it will appear red. The third advantage is that the color, though exaggerated, bears a recognizable resemblance to reality because it assumes the three-part classification of color expected by the eye. PCA Pseudocolor will make it obvious that a spot is different in spectral signature than other spots, but once noticed, Accurate Color and Extended Spectrum give a better chance of identifying a material explanation of the difference.
[9] Todd R. Hanneken, “Integrating Spectral and Reflectance Transformation Imaging for the Digitization of Manuscripts & Other Cultural Artifacts,” NEH Office of Digital Humanities White Papers (2014), https://securegrants.neh.gov/PublicQuery/main.aspx?f=1&gn=HD-51709-13, p. 7. Also available at http://palimpsest.stmarytx.edu/integrating.
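Extended Spectrum can be sketched in the same vein: run PCA separately on the short-, middle-, and long-wavelength thirds of the band stack, and map each first component to the blue, green, and red display channels respectively. The data and names are synthetic placeholders; real pipelines add normalization and sign conventions.

```python
import numpy as np

def first_component(X):
    """First principal component scores of a (pixels x bands) matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

rng = np.random.default_rng(3)
h, w, n_bands = 8, 8, 15
cube = rng.normal(0.5, 0.02, size=(h, w, n_bands))  # synthetic band stack
X = cube.reshape(-1, n_bands)

thirds = np.array_split(np.arange(n_bands), 3)      # short, middle, long
b = first_component(X[:, thirds[0]])                # "blueish" contrast -> blue
g = first_component(X[:, thirds[1]])                # "greenish" contrast -> green
r = first_component(X[:, thirds[2]])                # "reddish" contrast -> red
ext = np.stack([r, g, b], axis=-1).reshape(h, w, 3)
print(ext.shape)   # one three-channel image from fifteen bands
```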
Spectral RTI combines the advantages of RTI with the advantages of spectral imaging. It has all the interactivity and texture enhancements of RTI, along with the high spatial and color resolution of spectral imaging with its advanced color processing. It has the disadvantage of spectral imaging in cost, but the cost is only negligibly greater than spectral imaging alone (a flash or equivalent must also be acquired). It has the disadvantage of RTI in terms of the time required to capture the longer sequence of images, which ranges from five to twenty minutes depending on equipment and number of captures. Adding a narrowband spectral capture sequence to RTI adds only two or three minutes to the capture time because the object mounting, lens focus, and so forth do not need to be redone. Web dissemination of RTI images is more challenging than that of static images, but that gap is closing as described in the following section. None of the disadvantages pertain to quality or come close to the disadvantages of using each technology separately.
The integration of spectral and RTI is greater than the sum of its parts. Neither alone is an adequate replacement for what a scholar would like to do when studying an object. A scholar would move the object, look carefully at detail, and compare color all at the same time. One advantage to the integration is simple convenience of looking at texture and color in a single view rather than switching between windows or aligning them next to each other. Because texture and color information are complementary, seeing both together is helpful. For example, in a palimpsest it is sometimes possible, even in a single letter, to discern one stroke because a trace of ink stain survived the re-scraping, and another because the corrosion of the parchment where the ink had been can be discerned. Finally, the identification of marks on an object often requires answers to two simple but different questions: 1) Does it rise over the surface or recess into it? 2) What color is it and what else on the object is the same color? If digital facsimiles are to claim parity with or superiority over first-hand inspection, such basic questions of the human discovery process should be easily answerable. Spectral RTI can answer these questions more easily than either technology alone, and, with digital enhancements, sometimes more easily than first-hand inspection.
The basic concept of the integration works on the premise that texture information and color information are distinct, and the two technologies developed to digitize and render them use non-conflicting methods. RTI is concerned with texture and pays no more attention to color than the default features of the camera. Spectral imaging is concerned with color and typically strives for the most diffuse and even light possible.[10] Spectral imaging can be done with raking light, but still suffers from the three drawbacks of raking-light photography compared to RTI described above. The methods do not interfere because texture mapping relies entirely on variations in luminance (highlights and shadows when the light is at a certain angle of incidence); meanwhile high-color-resolution processing relies on properties of chrominance, the colors of light reflected at a particular pixel. Some color spaces used to represent brightness and color are structured around that distinction between luminance and chrominance. For example, early television broadcast technology presumed monochrome (luminance without chrominance) images. As color televisions were introduced, compatibility of the broadcast with older televisions was maintained by keeping the luminance sub-channel and adding two additional sub-channels for color. This is typically called the YCC or YCbCr color space, with Y representing luminance (luma), Cb representing chrominance on the blue axis, and Cr representing chrominance on the red axis. LAB and other color spaces rely on the same principle. The simplest approach to creating Spectral RTI images is to map the luminance information from monochrome RTI captures to the Y channel, and the chrominance information from spectral imaging to the Cb and Cr channels. As with color televisions, a straightforward conversion to RGB is necessary before rendering on screen.
Other color spaces and methods promise to maintain high bit depths, although presently 48-bit or 96-bit color could not be rendered on a computer screen anyway. Additional technical information can be found in the white paper of the “Integrating Spectral and Reflectance Transformation Imaging” project on the websites of the National Endowment for the Humanities and the Jubilees Palimpsest Project.[11]
[10] Even subtle unevenness in lighting is sometimes corrected through a process called “flattening.”
[11] See note 9 above.
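The luminance/chrominance grafting described above can be sketched with the common BT.601 YCbCr definitions: keep Cb and Cr from the processed spectral color, substitute the Y measured by an RTI capture, and convert back to RGB for display. The tiny arrays stand in for real registered images; production code would also handle gamma and value clipping.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 forward transform; rgb has shape (..., 3)."""
    y  = 0.299*rgb[..., 0] + 0.587*rgb[..., 1] + 0.114*rgb[..., 2]
    cb = (rgb[..., 2] - y) / 1.772
    cr = (rgb[..., 0] - y) / 1.402
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform back to RGB for on-screen rendering."""
    r = y + 1.402*cr
    b = y + 1.772*cb
    g = (y - 0.299*r - 0.114*b) / 0.587
    return np.stack([r, g, b], axis=-1)

spectral_color = np.array([[[0.6, 0.4, 0.3]]])    # one Accurate Color pixel
_, cb, cr = rgb_to_ycbcr(spectral_color)          # keep its chrominance
y_rti = np.array([[0.7]])                         # luminance from an RTI capture
out = ycbcr_to_rgb(y_rti, cb, cr)
print(np.round(out, 3))   # brighter pixel, same chrominance
```

Repeating this per light position yields a full Spectral RTI sequence: the relightable luminance varies while the spectrally derived color stays fixed.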
Several priorities are on the horizon for future progress. One priority is to simplify the tools, and another to reduce the equipment cost; both will certainly improve with time and economy of scale. Another experimental project on the horizon will use the same premise of grafting texture and color information using transmissive lighting (backlighting). Backlighting is important for the study of palimpsests because sometimes a hole or thinness of parchment is a good indicator of where ink had once been, as well as of punctures and scoring marks used by scribes to “line” the parchment. This evidence is complementary to the texture evident from RTI. The combination promises tremendous potential for the study of objects thin enough for light to pass through. Also in the category of basic questions that digital facsimiles should be able to answer quickly one could include, “Is that a mark or a hole?” Transmissive light could answer that question most efficiently. It seems likely that other innovations will build on the fundamental power of digital imaging to create data sets of perfectly registered images. This moves the study of visual artifacts from looking at and comparing pictures to processing and rendering data sets in a limitless variety of ways.
The digital humanities promise to build on the capability of digital information to be copied without loss of quality and to be transmitted with ease through the Internet and other digital media. The promise is that many scholars on different continents can collaborate in working on an artifact at the same time. The promise is not always realized. In the case of RTI images, now integrated with spectral imaging, we can describe three levels of access to cultural heritage. Roughly, those levels are no public access at all, public access through a closed system, and public access using open standards for interoperability on the web.
The first level, no public access, remains the most common as RTI images are often generated for private parties with no plan for dissemination or even preservation of the data. One can hope that the images will someday be available, or that insights resulting from private study will be published, but it must be admitted that this is a minimal level of service to the humanities. Intellectual property concerns may be the most common justification for not publishing digital images. While this is a complex issue, it can be observed that an increasing number of rights holders are recognizing that intellectual property increases in value as more people know about it. A more concrete factor is simply the difficulty of making images accessible. Much as it is easier to buy a book than to build a library, it is easier to acquire a digital facsimile for private use than to curate online dissemination. It does not help that RTI files themselves are large, not recognized by web browsers, and require special software (but see below for WebRTI). Before the “download” attribute in HTML 5, following a link to an RTI file in an HTML 4 webpage would show a screen of nonsense characters. Along with the familiar goal of promoting open access to intellectual property, a clear goal of humanists working with RTI is to move toward ease of publishing RTI images.
The second level, online dissemination in a closed system, is exemplified by the InscriptiFact Digital Image Library. This library is open in that it is free to create an account and that the Java applet can be downloaded for free and run on a Windows, Mac, or Linux computer. It is also a robust library, not only in the size of the collection but also in the metadata standards that make the library browsable and searchable within the applet. The collection of over 500,000 images (RTI and still) originated out of West Semitic Research led by Bruce Zuckerman and thus maintains its traditional focus on the ancient Near East. For scholars of this region and time period it is a very valuable resource. Nevertheless, the system is closed in several key ways and has been outpaced by some recent developments. First, there is no ability for users to upload images or annotations; the library is strictly read-only. The applet itself is free to use but closed source, resulting in a somewhat slow pace of new features and bug fixes (exacerbated by failures of Java to live up to its mission of being secure while maintaining platform independence). Though a powerful tool for specialists, it is not an inviting experience for casual web surfers, who must create an account, install software with administrator permissions, and ask network administrators to open unusual ports in the firewall if they are not open already. Casual browsing is also impeded by the requirement that an entire RTI file download before the image can be viewed. As those images grow to hundreds of megabytes, limits in bandwidth and system resources quickly become apparent. However, the most fundamental limitation of the InscriptiFact system in its current state of development is that it is online but not on the web.
There is no way to link from a webpage (or other web resource) to a particular image or view within the library, and there is no way to link an image in the library to annotation and other tools elsewhere on the web.
The third level, public access using open standards for interoperability on the web, is presently emerging as the future of dissemination for RTI images. The breakthrough that led to RTI interoperability on the open web began with WebGL, a graphics library now built into every major web browser, including those on smartphones. The processing required to view RTI images can now be done using a system’s graphics processing unit (GPU). With the added layer of WebRTI, it is now possible to disseminate RTI images directly in web browsers without plugins. An RTI image can fill a browser window or be embedded in an ordinary webpage or web environment. WebRTI is also faster because it does not load every detail of the image from the beginning. Like Google Maps, it fetches the detail it needs in real time as the user zooms in or pans around. The current state of the art for WebRTI is exemplified by Palazzo Blu (http://vcg.isti.cnr.it/PalazzoBlu) and other projects linked from the homepage of the author of the project, Gianpaolo Palma (http://vcg.isti.cnr.it/rti/webviewer.php). The future of the project will follow several tracks. For one, as of mid-2015 the web viewer lacks some of the useful enhancement features of the dedicated RTI viewers, as well as the ability to switch between layers and views. More importantly, the potential of WebRTI for web interoperability has not yet been realized. A simple step will be the ability to copy a link directly to a particular view (specifying zoom, pan, and light position). Thus, any webpage or web resource will be able to embed or link to a resource not just in general but in a particular view. The present annotation features will also benefit from compliance with Open Annotation and the ability to contribute and store annotations anywhere on the web.
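To illustrate what a deep link to a particular view might look like, the sketch below encodes zoom, pan, and light position in a URL fragment and recovers them again. The parameter names and the scheme itself are purely hypothetical; as of this writing WebRTI defines no such format.

```python
from urllib.parse import parse_qs, urlencode, urlsplit

def view_link(base_url, zoom, pan_x, pan_y, light_x, light_y):
    """Build a hypothetical deep link recording zoom, pan, and light position."""
    params = {"zoom": zoom, "x": pan_x, "y": pan_y, "lx": light_x, "ly": light_y}
    return base_url + "#" + urlencode(params)

def parse_view_link(url):
    """Recover the view state from such a link so a viewer could restore it."""
    fragment = urlsplit(url).fragment
    return {key: float(values[0]) for key, values in parse_qs(fragment).items()}
```

Putting the state in the fragment (after `#`) rather than the query string would let a static viewer page restore the view entirely in the browser, without any change to the server.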
Once information is published on the open web, search engines such as Google become important tools for discoverability and accessibility. However, even greater accessibility can be achieved through compliance with standards for Linked Open Data. Systems in the “cloud” require more rigid structures of information to work together. Once an image repository complies with those structures, however, it becomes possible to interface with a wide array of tools for viewing, transcription, annotation, and collaboration. The future of online image libraries is not to build a single platform that does everything within its own silo, but to link information to tools that will continue to grow through open collaboration.
Tools for the study of ancient literature and scribal culture are growing rapidly. Along with other areas of digital humanities, the digitization of visual artifacts has changed dramatically in the twenty-first century. We are approaching the point at which the study of a digital facsimile compares favorably in quality to first-hand inspection, especially for objects that are difficult to decipher. Digital enhancements and the power of the “crowd” given open access have the potential to generate significant advances in our understanding of manuscripts and other artifacts. It is reasonable to imagine a future in which critical editions support their claims with links that specify precise views of the evidence. As evidence is tagged and annotated, it will be possible to perform large-scale surveys of features of scribal practice just as it is now possible to search for text. Among many other possibilities, scholars and teachers may dig deeper in thinking about “primary sources,” from printed editions to visually rich digital copies of the artifacts themselves.
While the opinions expressed in this essay are those of the author alone, the research, innovation, and information described originate from the collaboration of the Jubilees Palimpsest Project (http://palimpsest.stmarytx.edu). The website gives further bibliography and information about the major project contributors who, in addition to the author, are Michael Phelps (Early Manuscripts Electronic Library), Ken Boydston (MegaVision Corporation), William Christens-Barry (Equipoise Imaging), Roger Easton, Jr. (Rochester Institute of Technology), Keith Knox (U.S. Air Force Research Labs, retired), Bruce Zuckerman (University of Southern California), Kenneth Zuckerman (University of Southern California), Marilyn Lundberg (University of Southern California), and Leta Hunt (University of Southern California).
Boydston, Ken. “N-Shot Multi-Spectral Capture.” MegaVision, http://www.mega-vision.com/multispectral.html.
Easton, Roger L., Jr., William A. Christens-Barry, and Keith T. Knox. “Ten Years of Lessons from Imaging of the Archimedes Palimpsest.” In Eikonopoiia: Digital Imaging of Ancient Textual Heritage, Technical Challenges and Solutions (2010). http://www.cis.rit.edu/DocumentLibrary/admin/uploads/CIS000087.pdf.
Hanneken, Todd R. “Integrating Spectral and Reflectance Transformation Imaging for the Digitization of Manuscripts & Other Cultural Artifacts.” In NEH Office of Digital Humanities White Papers (2014). https://securegrants.neh.gov/PublicQuery/main.aspx?f=1&gn=HD-51709-13.
Malzbender, Tom, Dan Gelb, and Hans Wolters. “Polynomial Texture Maps.” In Computer Graphics, Proceedings of ACM SIGGRAPH (2001). http://www.hpl.hp.com/research/ptm/papers/ptm.pdf.
Netz, Reviel, William Noel, Nigel Wilson, and Natalie Tchernetska. The Archimedes Palimpsest. The Archimedes Palimpsest Publications. 2 vols. Cambridge: Cambridge University Press, 2011.
Schroer, Carla, and Judy Bogart. “Reflectance Transformation Imaging: Guide to Highlight Image Processing.” Cultural Heritage Imaging (2011). http://culturalheritageimaging.org/What_We_Offer/Downloads/rtibuilder/RTI_hlt_Processing_Guide_v14_beta.pdf.
In the twenty-first century advances in digital technology are propelling the study of ancient literature and scribal culture. This essay describes an integrated set of advances in image capture, processing, and dissemination that improves upon first-hand experience and harnesses the power of the web to connect people and data. Illegible manuscripts are an area of particular interest among all the cultural heritage artifacts that benefit from these advances. Spectral RTI makes it possible to distinguish letters and other evidence of use based on traces as subtle as the corrosion of parchment where ink had once been and the spectral signature of ink stains that cannot be distinguished by the human eye. WebGL and open standards are making it possible to link to interactive enhanced images from anywhere on the web, and to link from the images to annotations and tools for analysis. The essay proceeds in four sections. First, Reflectance Transformation Imaging (RTI) makes it possible to capture and visualize fine texture in artifacts such as inscriptions, and at high resolution even the texture of parchment. Second, spectral imaging dramatically improves upon the spectral range and resolution of the human eye in seeing and distinguishing colors. Third, Spectral RTI combines the advantages of the first two technologies, producing a result that is greater than the sum of its parts. Finally, open web standards open new doors for accessing, viewing, and annotating Spectral RTI images.
Todd R. Hanneken, Ph.D. is Associate Professor of Theology and Early Jewish Literature at St. Mary’s University in San Antonio. He is the director of the Jubilees Palimpsest Project.