And if so, why would you ever want to? About a year ago the University of Iowa Libraries Special Collections announced a rather exciting project: to digitize the data tapes from the Explorer I satellite mission. My first thought: the data on these tapes is digital to begin with, so there isn’t really anything to digitize here. They explain that the plan is to “digitize the data from the Explorer I tapes and make it freely accessible online in its original raw format, to allow researchers or any interested parties to download the full data set.” It might seem like a minor point of vocabulary, but that sounds like transferring or migrating data from its original storage media to new media.
To clarify, I’m not trying to be a pedant here. What they are saying is clear and it makes sense. With that said, I think there are some meaningful issues to unpack here about the differences between digital preservation and digitization, and between reading, encoding and registering digital information.
Digitization involves taking digital readings of physical artifacts
In digitization, one uses some mechanism to create a bitstream, a representation of some set of features of a physical object in a sequence of ones and zeros. In this respect, digitization is always about the creation of a new digital object. The new digital object registers some features of the physical object. For example, a digital camera registers a specific range of color values at a specific but limited number of dots per inch. Digital audio and video recorders capture streams of discrete numerical readings: of changes in air pressure (sound) and of chroma and luminance values over time. In short, digitization involves taking readings of some set of features of an artifact.
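To make the idea of "taking readings" concrete, here is a tiny illustrative Python sketch (my own, not part of the original post): it samples a simulated continuous pressure wave at discrete moments and quantizes each reading to an 8-bit integer, the same kind of feature-selecting registration described above.

```python
import math

SAMPLE_RATE = 8000      # readings per second (an arbitrary choice)
DURATION = 0.01         # seconds of "air pressure" to capture
FREQUENCY = 440.0       # a 440 Hz tone stands in for the analog phenomenon

def analog_pressure(t):
    """A stand-in for the continuous physical signal being measured."""
    return math.sin(2 * math.pi * FREQUENCY * t)

# Digitization: take discrete readings and quantize each one to 8 bits.
samples = []
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE                              # discrete moment in time
    reading = analog_pressure(t)                     # value in the range [-1.0, 1.0]
    quantized = round((reading + 1.0) / 2.0 * 255)   # map to 0..255
    samples.append(quantized)

print(f"{len(samples)} eight-bit readings, e.g. {samples[:10]}")
```

The resulting list of integers is a new digital object that registers only the features the sampling scheme was designed to capture; everything else about the original phenomenon is discarded.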
Reading bits off old media is not digitization
Taking the description of the data tapes from the Explorer I mission, it sounds like this particular project is migrating data. That would mean reading the sequence of bits off their original media and then making them accessible. On one level it makes sense to call this digitization: the results are digital, and the general objective of digitization projects is to make materials more broadly accessible. Moving the bits off their original media and into an online networked environment feels the same, but it has some important differences. If we have access to the raw data from those tapes we are not accessing some kind of digital surrogate, or some representation of features of the data; we are actually working with the original. The allographic nature of digital objects means that working with a bit-for-bit copy of the data is exactly the same as working with the bits encoded on their original media. With this noted, perhaps most interestingly, there are times when one does want to actually digitize a digital object.
When we do digitize digital objects
In most contexts of working with digital records and media for long-term preservation, one uses hardware and software to get access to and acquire the bitstream encoded on the storage media. With that said, there are particular cases where you don’t want to do that. In cases where parts of the storage media are illegible, or where there are issues with getting the software in a particular storage device to read the bits off the media, there are approaches that bypass a storage device’s interpretation of its own bits and instead resort to registering readings of the storage media itself. For example, a tool like KryoFlux can create a disk image of a floppy disk that is considerably larger in file size than the actual contents of the disk. In this case, the tool is actually digitizing the contents of the floppy disk. It stops treating the bits on the disk as digital information and instead records readings of the magnetic flux transition timings on the media itself. The result is a new digital object, one from which you can then work to interpret or reconstruct the original bitstream from the recordings of the physical traces of those bits you have digitized.
So when is it digitization, and when isn’t it?
So, it’s digitization whenever you take digital readings of features of a physical artifact. If you have a bit-for-bit copy of something, you have migrated or transferred the bitstreams to new media but you haven’t digitized them. With that said, there are indeed times when you want to take digital readings of features of the actual analog media on which a set of digital objects are encoded. That is a situation in which you would be digitizing a set of features of the analog media on which digital objects reside. What do you think? Is this helpful clarification? Do you agree with how I’ve hashed this out?
After reading a great post by the Smithsonian Institution Archives on Archiving Family Traditions, I started thinking about my own activities as a steward of my and my family’s digital life.
I give myself a “C” at best.
Now, I am not a bad steward of my own digital life. I make sure there are multiple copies of my files in multiple locations and on multiple media types. I have downloaded snapshots of websites. I have images of some recent important texts. I download copies of email from a cloud service into an offline location in a form that, so far, I have been able to read and migrate across hardware. I have passwords to online accounts documented in a single location that I was able to take advantage of when I had a sudden loss.
I certainly make sure my family is educated about digital preservation and preservation in general, to the point that I think (know?) they are sick of hearing about it. I have begun a concerted but slow effort to scan all the family photos in my possession and make them available with whatever identifying metadata (people, place, date) that I gathered from other family members, some of whom have since passed away. I likely will need to crowdsource some information from my family about other photos.
But I am not actively archiving our traditions. I often forget to take digital photos at events, or record metadata when I do take them. I have never collected any oral histories. I have not recorded my own memories. I do have some of my mother’s recipes (and cooking gear) and I need to make sure that these are documented for future generations. I have other items that belonged to my mother and grandmother that I also need to more fully document so others know their provenance and importance. And then I need to make sure all my digital documentation is distributed and preserved.
I asked some friends what they were doing, and got some great answers. One is creating a December Daily scrapbook documenting the activities of the month. One has been documenting the holiday food she prepares and family recipes for decades, in both physical and digital form. One has been making a photobook of the year for every year since her children were born, and plans to create a book of family recipes. Another has been recording family oral histories, recording an annual family religious service for over 20 years, and is digitizing family photos that date back as far as the 1860s.
How are you documenting and archiving your family’s traditions, whether physical or digital? And preserving that documentation?
With the bwFLA Emulation-as-a-Service you can enable users to view your (interactive) objects without actually giving the environment+object to the user. This is a nice feature, especially for digital art and similar materials: you can provide access to an almost unlimited number of people, who are able to view, use and interact with a piece of digital art without being able to copy it. The owner remains in control of the object and is able to restrict access at any time.
Jon Thomson (www.thomson-craighead.net) nicely prepared and integrated two of his art pieces for public access using the bwFLA EaaS infrastructure. Please take a look at:
Please be patient; it may take a minute or so to load.
For more information about bwFLA and Emulation as a Service please take a look at our website: http://bw-fla.uni-freiburg.de/
The humble bloggers who toil on behalf of The Signal strive to tell stimulating stories about digital stewardship. This is unusual labor. It blends passion for a rapidly evolving subject with exacting choices about what to focus on.
Collecting, preserving and making available digital resources is driving enormous change, and the pace is so fast and the scope so broad that writing about it is like drinking from the proverbial firehose.
Back when The Signal was a mere gleam in the eye, institutional gatekeepers were, as is their wont, skeptical. “Can you make digital preservation interesting?” they asked. “Is there enough to write about? Will anyone care?”
While we responded with a bureaucratic version of “yes, of course!” to each question, we had to go prove it. Which, after many months and hundreds of posts, I think we have done.
I attribute success to stories that have meaning in the lives of our readers, most of whom care deeply about digital cultural heritage. As noted, that topic is as diverse as it is dynamic. A good way to gauge this is to consider the range of posts that were the most popular on the blog for the year. So here, ranked by page views based on the most current data, are our top 14 posts of 2013 (out of 257 total posts).
- 71 Digital Portals to State History
- You Say You Want a Resolution: How Much DPI/PPI is Too Much?
- Is JPEG-2000 A Preservation Risk?
- Scanning: DIY or Outsource
- Snow Byte and the Seven Formats: A Digital Preservation Fairy Tale
- Social Media Networks Stripping Data from Your Digital Photos
- Fifty Digital Preservation Activities You Can Do
- Announcing a Free “Perspectives on Personal Digital Archiving” Publication
- Top 10 Digital Preservation Developments of 2012
- Analysis of Current Digital Preservation Policies: Archives, Libraries and Museums
- The Metadata Games Crowdsourcing Toolset for Libraries & Archives: An Interview with Mary Flanagan
- Doug Boyd and the Power of Digital Oral History in the 21st Century
- Moving on Up: Web Archives Collection Has a New Presentation Home
- Anatomy of a Web Archive
Special bonus: Page views are only one way to measure top-of-the-yearness. In the blogging world, comments are also important, as they indicate the degree to which readers engage with a post. By that measure, the top 14 posts of 2013 are slightly different.
- 71 Digital Portals to State History (51 comments)
- Snow Byte and the Seven Formats: A Digital Preservation Fairy Tale (21 comments)
- Is JPEG-2000 A Preservation Risk? (17 comments)
- 39 And Counting: Digital Portals to Local Community History (16 comments)
- Social Media Networks Stripping Data from Your Digital Photos (14 comments)
- You Say You Want a Resolution: How Much DPI/PPI is Too Much? (13 comments)
- What Would You Call the Last Row of the NDSA Levels of Digital Preservation? (12 comments)
- CURATEcamp Exhibition: Exhibition in and of the Digital Age (11 comments)
- Word Processing: The Enduring Killer App (10 comments)
- Older Personal Computers Aging Like Vintage Wine (if They Dodged the Landfill) (10 comments)
- Scanning: DIY or Outsource (10 comments)
- Where is the Applied Digital Preservation Research? (8 comments)
- The “Spherical Mercator” of Time: Incorporating History in Digital Maps (8 comments)
- Opportunity Knocks: Library of Congress Invites No-cost Digitization Proposals (7 comments)
Thank you to all our readers, and most especially to our commenters.
Steven Puglia, manager of Digital Conversion Services at the Library of Congress, died peacefully on December 10, 2013 after a year-long battle with pancreatic cancer. Puglia had a profound effect on his colleagues here in Washington and worldwide, and there is a great outpouring of grief and appreciation in the wake of his passing.
The testimony embedded in this tribute demonstrates that Steve’s passing left the cultural heritage, conservation and preservation communities stunned, somber and affectionate. Their words attest to his character, his influence and the significance of his work. He was a rare combination of subject-matter expert and gifted, masterful teacher, who captivated and inspired audiences.
“Generous” is a word colleagues consistently use to describe Puglia – generous with his time, energy, advice and expertise. He was a pleasure to be around, the kind of colleague you want in the trenches with you – compassionate, kind and brilliant, with a wry sense of humor.
Steve enjoyed sharing his knowledge and helping others understand. From International Standards groups to workshops, from guidelines to desk-side help for colleagues, Steve sought out opportunities to teach. During discussions of how detailed to get in the Guidelines, Steve would often remind us that digitization is, by its nature, a technical endeavor…He worked even harder to make it palatable for those who simply hadn’t gotten it yet. — Jeff Reed, National Archives and Records Administration and co-author with Steve Puglia and Erin Rhodes of the 2004 Technical Guidelines for Digitizing Cultural Heritage Materials: Creation of Raster Image Master Files
Photography defined Puglia’s life — both the act of photography and the preservation and access of photographs. It was at the root of his work even as his professional life grew and branched in archival, preservationist and technological directions.
He earned a BFA in Photography from the Rochester Institute of Technology in 1984 and worked at the Northeast Document Conservation Center duplicating historic negatives. In 1988, Puglia earned an MFA in Photography from the University of Delaware and went to work for the National Archives and Records Administration’s reformatting labs as a preservation and imaging specialist.
At NARA, Puglia worked with microfilm, storage of photographs and establishing standards for negative duplication. With the advent of the digital age, Puglia set up NARA’s first digital imaging department and researched the impact of digital technology on the long-term preservation of scanned images. He was instrumental in developing new methods of digital image preservation and helping to set imaging standards.
I feel very fortunate and thankful that I had the opportunity to work alongside Steve and to learn so much from him; Steve was a smart, inquisitive, kind, generous colleague, but even more so, he was an amazing teacher. He was generous in sharing his vast knowledge of digitization as well as traditional photographic processes and concepts – and the intersection of the two – in the work that we were doing at NARA.
I think writing the Guidelines was a labor of love for all of us, but especially for Steve. We collectively worried about how they would be perceived, how they would be useful, and about all the small details of the document. I remember especially struggling and working on the Image Parameter tables for different document types, all of us knowing these would probably be the most consulted part of the Guidelines. The fact that these tables are still relevant and stand strong today is a testament to Steve’s knowledge and contributions to the field. I feel lucky that I had a chance to learn from Steve; he was my first real mentor. We should all feel lucky to benefit from his knowledge. He will be missed. — Erin Rhodes, Colby College and co-author with Steve Puglia and Jeff Reed of the 2004 Technical Guidelines for Digitizing Cultural Heritage Materials: Creation of Raster Image Master Files
In 2011, Puglia joined the Library of Congress as manager of Digital Conversion Services where he oversaw the research and development of digital imaging approaches, data management, development of tools and other technical support in the Digital Imaging Lab.
It was not his first time working with the Library. In 1991 and 1992 he collaborated with the Preservation Directorate and over the past several years he had been a major contributor to the Federal Agencies Digitization Guidelines Initiative. He became chair of the FADGI Still Image Working Group; in August 2011, he posted an update about the Still Image Working Group on The Signal.
Steve was a driving force in creating guidelines to help steer cultural heritage institutions towards standardized methods for digitizing their treasures. While at NARA, he was the primary author of the Technical Guidelines for Digitizing Cultural Heritage Materials: Creation of Raster Image Master Files, the 2004 document that continues to serve as a teaching tool and reference for all those involved in digital imaging. In 2007, Steve extended his efforts to form the FADGI Still Images Working Group and participated as a key technical member, providing invaluable input on practically every aspect of imaging technique and workflow.
I chaired the group from its start through 2010, and I could not have accomplished half of what I did without Steve. When I was at a loss as to how to best proceed, Steve provided the guidance I needed. He was one of the most genuine and honorable individuals I have known. Steve was selfless in giving his time to anyone who needed assistance or advice, and he will be missed by those who knew him. His passing is a tremendous loss to the cultural heritage imaging community. — Michael Stelmach, former Digital Conversion manager at the Library and past FADGI coordinator.
In reading Puglia’s June 2011 Signal blog post about the JPEG 2000 Summit, you get a sense of his excitement for his work and a taste of how well he could communicate a complex subject in simple language.
This aspect of Puglia’s character comes up repeatedly: his drive to make his work clearly understood by anyone and everyone. In Sue Manus’s blog post introducing Puglia to readers of The Signal, she writes, “He says the next steps include working to make the technical concepts behind these tools better understood by less technical audiences, along with further development of the tools so they are easier to work with and more suited to process monitoring and quality management.” And “From an educational perspective, he says it’s important to take what is learned about best practices and present the concepts and information in ways that help people understand better how to use available technology.”
Colleagues declare that Puglia was a key figure in setting standards and guidelines. They report that he led the digital-preservation profession forward and he made critical contributions to the cultural heritage community. They praise his foresight and his broad comprehension of technology, archives, library science, digital imaging and digital preservation, all tempered by his practicality. And they all agree that the impact of his work will resonate for a long time.
Sometimes the best discussions–the ones you really learn from–are conversations in which the participants express different ideas and then sort them out. It’s like the college dorm debates that can make the lounge more instructive than a classroom. Over the years, I learned from Steve in exchanges leavened with friendly contrariety. For example, in 2003, we were both on the program at the NARA preservation conference. I was helping plan the new Library of Congress audiovisual facility to be built in Culpeper, Virginia, and my talk firmly pressed the idea that the time had come for the digital reformatting of audio and video, time to set aside analog approaches. Steve’s presentation was about the field in a more general way and it was much more cautious, rich with reminders about the uncertainties and high costs that surrounded digital technologies, as they were revealed to us more than a decade ago.
In the years that followed, our small tug of war continued and I saw that Steve’s skepticism represented the conservatism that any preservation specialist ought to employ. I came to think of him as a digital Descartes, applying the great philosopher’s seventeenth century method of doubt to twenty-first century issues. And like Descartes, Steve mustered the best and newest parts of science (here: imaging science) to build a coherent and comprehensive digital practice.
He may have been a slightly reluctant digital preservation pioneer but without doubt he was a tremendous contributor whose passing is a great loss to friends and colleagues. — Carl Fleischhauer, Library of Congress digital format specialist and FADGI coordinator
Puglia’s ashes will be scattered in New Hampshire along a woodland brook that he loved. A fitting end for a photographer.
From Tuesday 19th November until Thursday 21st November the internal SCAPE Developer’s Workshop was held at the Brno University of Technology.
The overall aims of the workshop were diverse: first of all, to get everyone, in particular new partners, up to speed and aligned with project work and developments; second, to get a clear understanding of how the new partners' work and existing project work will integrate and what needs to be done to make this happen; furthermore, to identify, understand and work on issues surrounding PT/PC/PW integration; and last but not least, to productise existing SCAPE tools.
It is always good to meet so many SCAPErs and to work together face to face. It was really inspiring to hear all the demonstrations and presentations and to get a good overview of all the SCAPE activities. The overall feeling was very positive, there were lots of discussions and everyone worked very hard on the needed next steps in this last year of SCAPE.
I was there as a representative of the Take Up sub-project. My main focus was the productisation of the SCAPE tools and the need for more general information about the tools and SCAPE overall. It was good to see that this need for productisation was recognised, and we immediately started working on adjusting the Readme files. There will be more follow-up actions to get all the general information in place, but the workshop was a good start for reaching a lot of important SCAPErs: the developers.
This was just one of the many, many things that people were working on.
Overall, it was a good workshop, and our new partner Brno University made sure that we felt very welcome: the hosting in this former cloister was excellent.
A few weeks ago, as part of the Aligning National Approaches to Digital Preservation conference, an announcement was made of the beta launch of a new resource to catalog and describe digital preservation tools: the Community Owned digital Preservation Tool Registry (COPTR).
The idea behind this registry is to try and consolidate all of the digital preservation tool resources into one place, eliminating the need for many separate registries in multiple organizations.
As an example of how this will be useful, at NDIIPP we have our own tools page that we have maintained over the years. Many of the tools on this list have been produced by either the Library of Congress or our NDSA partners, with the overall aim of providing these tools to the wider digital preservation community. Of course, the tools themselves, or the links, change on a fairly regular basis; they are either updated or just replaced altogether. And, as our list has grown, there is also the possibility of duplication with other such lists or registries that are being produced elsewhere. We have provided this to our users as an overall resource, but the downside is that it requires regular maintenance. For now, our tools page is still available, but we have currently put any updates on hold in anticipation of switching over to COPTR.
COPTR is meant to resolve such issues of duplication and maintenance, and to maintain a more centralized, up-to-date, one-stop shop for all digital preservation related tools.
For ease of use, COPTR is presented on a wiki – anyone has access to this in order to add tools to the registry or to edit and update existing ones. Here’s how it’s described by Paul Wheatley, one of the original developers of this effort:
“The registry aims to support practitioners in finding the tools they need to solve digital preservation problems, while reducing the glut of existing registries that currently exacerbate rather than solve the challenge. (I’ve blogged in detail about this.)
COPTR has collated the contents of five existing tool registries to create a greater coverage and depth of detail that has to date been unavailable elsewhere. The following organisations have partnered with COPTR and contributed data from their own registries: The National Digital Stewardship Alliance, The Digital Curation Centre (DCC), The Digital Curation Exchange (DCE), The Digital POWRR Project, The Open Planets Foundation (OPF)”
The above organizational list is not meant to be final, however. Wheatley emphasizes that they are looking for other organizations to participate in COPTR and to share their own tool registries.
On the wiki itself, the included tools are grouped into “Tools by Function” (disk imaging, personal archiving, etc.) or “Tools by Content” (audio, email, spreadsheet, etc.). According to the COPTR documentation, specific information for each tool will include its description and specific function, relevant URLs to the tool or resources, and any user experiences. Generally, the tools to be included will be anything in the realm of digital preservation itself, such as those performing functions described in the OAIS model or in a digital lifecycle model. More specifically, the COPTR site describes in-scope vs. out-of-scope as the following:
- In scope: characterisation, visualisation, rendering, migration, storage, fixity, access, delivery, search, web archiving; open source software, commercial software and everything in between.
- Out of scope: digitisation, file creation
According to Wheatley, the goal is for organizations to eventually close their own registries and instead reference COPTR. The availability of a datafeed from COPTR provides a useful way of exposing COPTR (or subsets of the COPTR data) on their own sites.
This overall goal may sound ambitious, but it’s ultimately very pragmatic: to create a community-built resource that is accurate, comprehensive, up-to-date and eliminates duplication.
COPTR Needs You! To make this effort a success, the organizers are asking for some help:
- Add tools to the list (see the guide here)
- Give feedback
- Promote COPTR
- Consider bringing your organization into partnership with COPTR
- See this “to do” list to help develop COPTR even further.
And feel free to contribute feedback in the comment section of this blog post, below.
COPTR is a registry owned by the community, for the community. It is supported by Aligning National Approaches to Digital Preservation, The Open Planets Foundation, The National Digital Stewardship Alliance, The Digital Curation Centre, The Digital Curation Exchange and the Digital POWRR Project.
The following is a guest post by report co-authors and NDSA Standards and Practices Working Group members:
- Winston Atkins, Duke University Libraries
- Andrea Goethals, Harvard Library
- Carol Kussmann, Minnesota State Archives
- Meg Phillips, National Archives and Records Administration
- Mary Vardigan, Inter‐university Consortium for Political and Social Research (ICPSR)
The results of the 2012 National Digital Stewardship Alliance Standards and Practices Working Group’s digital preservation staffing survey have just been released! Staffing for Effective Digital Preservation: An NDSA Report (pdf) shares what we learned by surveying 85 institutions with a mandate to preserve digital content about how they staffed and organized their preservation functions. You may remember that The Signal blogged about the survey on August 8, 2012 to encourage readers to participate: “How do you staff your Digital Preservation Initiatives?” As promised in that post and elsewhere, the results of the survey are now publicly available and the survey data have been archived for future use.
We’ll highlight some of the significant findings here, but we encourage you to read the full report and let us know what you think – both about the report and the current state of digital preservation staffing.
The NDSA found that most organizations surveyed had no dedicated digital preservation department to take the lead in this area. In most cases, preservation tasks fell to a library, archive or other department. Close to half of respondents thought that the digital preservation function in their organizations was well organized, but a third were not satisfied and many were unsure.
Another key finding is that almost all institutions believe that digital preservation is understaffed. Organizations wanted almost twice the number of full‐time equivalents that they currently had. Most organizations are retraining existing staff to manage digital preservation functions rather than hiring new staff.
The survey also asked specifically about the desired qualifications for new digital preservation managers. Respondents believe that passion for digital preservation and a knowledge of digital preservation standards, best practices, and tools are the most important characteristics of a good digital preservation manager, not a particular educational background or past work experience.
Other findings from the survey showed that most organizations expected the size of their holdings to increase substantially in the next year. Twenty percent expect their current content to double. Images and text files are the most common types of content being preserved. Most organizations are performing the majority of digital preservation activities in‐house but many outsource some activities (digitization was the most common) and are hoping to outsource more.
The survey provides some useful baseline data about staffing needs, and the NDSA Standards and Practices Working Group recommends that the survey be repeated in two to three years to show change over time as digital preservation programs mature and as more organizations self‐identify as being engaged in digital preservation.
What do you think? We welcome your comments on the current report or any recommendations about the next iteration of the survey.
The reason we worry about preserving digital content, and why preservation action needs to be taken, is closely tied to the idea that content is at risk. Risk relates to the potential of losing something of value, weighed against the potential of gaining something of value. In digital preservation, the risk is losing long-term and continuous access to (or usability of) content by the intended users, weighed against the cost (or profit) of maintaining such access. The long-term and continuous aspects of this access mean that there should be a continuous, long-term process that detects when content is misaligned with the requirements of the intended users. This process is preservation watch.
In practice, preservation watch becomes even more complex, as "long-term" and "continuous" are often conflicting requirements. To tackle this, an institution will normally define a "preservation format", which tries to fulfill the long-term access requirement, and create "access" or "dissemination" copies, which are optimized for the user community.
Monitoring whether content is aligned with the long-term and continuous access requirements, i.e. whether the selected preservation and access formats are still adequate, is a big endeavor that quickly becomes infeasible with large-scale content. Institutions are normally able to tackle the usual suspects, like images and text documents, but are unable to process the long tail of file formats that almost all institutions have.
Scout – a preservation watch system
Scout is a preservation watch system being developed within the SCAPE project. It provides an ontological knowledge base to centralize all the information necessary to detect preservation risks and opportunities. It uses plugins to allow easy integration of new sources of information, such as file format registries, tools for characterization, migration and quality assurance, policies, human knowledge and others. The knowledge base can be easily browsed, and triggers can be installed to automatically notify users of new risks and opportunities. Examples of such notifications could be: content fails to conform to defined policies, a format has become obsolete, or new tools able to render your content are available.
For example, you can continuously monitor your content's file formats and other characteristics, e.g. compression scheme. Scout can monitor your content profile over time and allow you to compare it with other institutions' profiles, see how content evolves and cross-reference that information with your policies, file format registries (like PRONOM), and any other information that can be provided to Scout.
This will give you invaluable insight into your content and how it relates to the outside world.
What information does Scout currently have?
Content
Scout is able to monitor the content profile, which is a summary of the content characterization, fetching information about file format distribution, file size and file characteristics like compression scheme. Scout does this using C3PO and FITS: you run FITS on every file of your content to get the characterization output, and then run C3PO to generate the content profile XML that can be monitored by Scout.
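To illustrate the kind of aggregation this pipeline produces, here is a rough Python sketch, my own and not part of FITS, C3PO or Scout, that walks a directory of FITS output XML files and tallies a simple format distribution. The element and attribute names (an identity element with a format attribute) are assumptions based on typical FITS output and should be verified against your own FITS version.

```python
import collections
import pathlib
import xml.etree.ElementTree as ET

FITS_OUTPUT_DIR = pathlib.Path("fits-output")   # assumed directory of FITS XML output files

def format_of(fits_xml_path):
    """Return the format name reported in a FITS output file, or 'unknown'."""
    root = ET.parse(fits_xml_path).getroot()
    for elem in root.iter():
        # Match the identity element regardless of XML namespace prefix.
        if elem.tag.endswith("identity") and elem.get("format"):
            return elem.get("format")
    return "unknown"

distribution = collections.Counter(
    format_of(path) for path in FITS_OUTPUT_DIR.glob("*.xml")
)

for fmt, count in distribution.most_common():
    print(f"{count:8d}  {fmt}")
```

A profile like this, recomputed on each ingest or harvest, is the sort of summary a watch system can compare over time and against policies and registries.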
Here is an example of the data gathered from a web archive collection:
Internet Memory Foundation web archive collection (harvests of a confidential domain from 2009 to 2012)
- Content size (on each harvest)
- Format distribution (table with latest status and a diagram with history on each harvest), including a long tail of other formats
- Compression scheme (on each harvest)
Scout allows upload of preservation control policies in an RDF model created in the SCAPE project. Check the Preservation Policy Levels in SCAPE paper for more information about the Preservation Policy model. These control policies define requirements on the content that can be automatically checked for conformance against the monitored content. For example, you can upload to Scout a policy that defines that the compression scheme must be lossless, and monitor your content so that you are warned whenever a lossy format is added to your content.
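As a minimal sketch of what such a conformance check boils down to, the following Python fragment checks an assumed, simplified content-profile structure against a lossless-compression requirement. Scout itself works with RDF policies and its own knowledge base, so treat the data structures and scheme names here purely as illustrative assumptions.

```python
# Assumed, simplified content profile: compression scheme -> number of files.
content_profile = {
    "Uncompressed": 12000,
    "LZW": 340,
    "JPEG (lossy)": 25,
}

# Assumed encoding of the control policy "compression scheme must be lossless".
LOSSLESS_SCHEMES = {"Uncompressed", "LZW", "Deflate"}

violations = {
    scheme: count
    for scheme, count in content_profile.items()
    if scheme not in LOSSLESS_SCHEMES
}

if violations:
    # In a watch system, this is where a trigger would notify the curator.
    print("Policy violation: lossy compression found:", violations)
else:
    print("All content conforms to the lossless-compression policy.")
```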
To add policies you have to log into Scout and upload your policy RDF model in the Scout dashboard.
Note that the current version of Scout does not support multiple users (so you must download your own instance of Scout to do this). Note also that not all policies can be checked for conformance, as they might depend on information that does not yet exist, but you can add more information to Scout at any time (via source adaptors). Finally, please note that you might need to create a new trigger that cross-references a control policy with the content profile, but default triggers and common vocabularies are currently being developed to make this cross-referencing easier.
Registries
Scout currently monitors the PRONOM registry via its SPARQL endpoint, which currently describes 843 file formats.
An experiment with Automatic Preservation Watch using Information Extraction on the Web was presented at the last iPRES conference (2013). In this experiment, the journals provided by a publisher are automatically extracted from the Web by doing focused crawls (using journal and publisher names), and relations are calculated from natural-language statements using information extraction tools.
In the experiment, 500,000 web pages containing about 18 million sentences were crawled, and this resulted in 2,000 journal titles and 500 journal-publisher relations. Comparing the results with eDepot and the Keepers registry gave the following results.
Comparing the results with eDepot, we found that 86% of the gathered journal titles were not in eDepot and should be added, 10% were already registered and 4% were false positives. Manually comparing a sample of the results with the Keepers registry, we estimate that around 50% of all found journal-publisher relationships were already present and 35% needed to be added. We also estimate that there are more false positives in the journal-publisher results, because detecting journal(title)-publisher(name) relations is more complex and error-prone than just detecting journal titles.
This experiment demonstrates that information extraction technologies can be a good complement to registries and can even serve as a substitute information source when no registries exist for a given subject. Nevertheless, some work is needed to reduce the related error; several suggestions on how to do this are available in the paper.
How can I use Scout?
Plans exist to create a central instance for Scout, which could serve as a central hub for digital preservation information. For now, there is no such central instance, but you can check out the demonstration instance at http://scout.scape.keep.pt (please be aware that this is a development/demonstration site and may go down at any moment).
You can also download and install your own instance of Scout, gather information and monitor your content. To know how, check the development site: http://openplanets.github.io/scout/
Finally, you can send us your content profile and be an early adopter. To know more, please contact me at lfaria[AT]keep.pt
From the very beginning of the SCAPE project, it was a requirement that the SCAPE Execution Platform be able to leverage the functionality of existing command-line applications. The solution for this is ToMaR, a Hadoop-based application which, amongst other things, allows for the execution of command-line applications in a distributed way using a computer cluster. This blog post describes the combined usage of a set of SCAPE tools for characterising and profiling web-archive data sets.
We decided to use FITS (File Information Tool Set) as a test case for ToMaR for two reasons: First, the FITS approach of producing “normalised” output on the basis of various file format characterisation tools makes sense, and therefore, enabling the execution of this tool on very large data sets will be of great interest for many people working in the digital preservation domain. Second, the application is challenging from a technical point of view, because it starts several tools as sub-processes. Even if a process takes only one second per file, we have to keep in mind that web archives usually have potentially billions of files to process.
The workflow in figure 1 is an integrated example of using several SCAPE outcomes in order to create a profile of web archive content. It shows the complete process from unpacking a web archive container file to viewing aggregated statistics about the individual files it contains using the SCAPE profiling tool C3PO:
Figure 1: Web Archive FITS Characterisation using ToMaR, available on myExperiment: www.myexperiment.org/workflows/3933
The inputs in this workflow are defined as follows:
- “c3po_collection_name”: the name of the C3PO collection to be created.
- “hdfs_input_path”: a Hadoop Distributed File System (HDFS) path to a directory which contains text file(s) with absolute HDFS paths to ARC files.
- “num_files_per_invocation”: the number of items to be processed per FITS invocation.
- “fits_local_tmp_dir”: the local directory where the FITS output XML files will be stored.
The workflow uses Spacip, a map-only Hadoop job, to unpackage the ARC container files into HDFS and to create input files which can subsequently be used by ToMaR. After merging the mapper output files from Spacip into one single file (MergeTomarInput), the FITS characterisation process is launched by ToMaR as a MapReduce job. ToMaR uses an XML tool specification document which defines the inputs, outputs and execution of the tool. The tool specification document for FITS used in this experiment defines two operations, one for single-file invocation and one for directory invocation.
FITS comes with a command-line interface that allows a single file to be used as input to produce the FITS XML characterisation result. But if the tool were started from the command line for each individual file in a large web archive, the start-up time of FITS, including its sub-processes, would accumulate and result in poor performance. Therefore, it comes in handy that FITS allows the definition of a directory which is traversed recursively to process each file in the same JVM context. ToMaR permits making use of this functionality by defining an operation which processes a set of input files and produces a set of output files.
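To illustrate why directory invocation matters, here is a rough Python sketch, not part of ToMaR, that stages files into temporary batch directories and calls FITS once per batch, so the JVM start-up cost is paid per batch rather than per file. The fits.sh path and the -i/-o options are assumptions based on the FITS command-line documentation and should be checked against the installed version.

```python
import pathlib
import shutil
import subprocess
import tempfile

FITS_CMD = "/opt/fits/fits.sh"          # assumed install location
INPUT_FILES = sorted(
    p for p in pathlib.Path("unpacked-arc-content").glob("*") if p.is_file()
)                                       # assumed layout of unpacked ARC payloads
OUTPUT_DIR = pathlib.Path("fits-output")
FILES_PER_INVOCATION = 50               # the parameter being tuned in the experiment

OUTPUT_DIR.mkdir(exist_ok=True)

for start in range(0, len(INPUT_FILES), FILES_PER_INVOCATION):
    batch = INPUT_FILES[start:start + FILES_PER_INVOCATION]
    with tempfile.TemporaryDirectory() as batch_dir:
        for f in batch:
            shutil.copy(f, batch_dir)   # stage the batch into one directory
        # One FITS invocation (and one JVM start-up) per batch, not per file.
        subprocess.run(
            [FITS_CMD, "-i", batch_dir, "-o", str(OUTPUT_DIR)],
            check=True,
        )
```

Choosing the batch size is exactly the trade-off the Taverna experiment described below measures: too small and start-up time dominates, too large and individual tasks become long and uneven.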
The question of how many files should be processed per FITS invocation can be addressed by setting up a Taverna experiment like the one shown in figure 2. The workflow presented above is embedded in a new workflow in order to generate a test series. A list of 40 values for the number of files to be processed per invocation, increasing in steps of 10, is given as input to the "num_files_per_invocation" parameter. Taverna then automatically iterates over the list of input values, combining them with the other inputs as a cross product and launching 40 runs of the embedded workflow.
Figure 2: Wrapper workflow to produce a test series.
5 ARC container files with a total size of 481 Megabytes and 42223 individual files were used as input for this experiment. The accumulated execution time was around 24 hours and led to the result shown in figure 3.
Figure 3: Execution time vs. number of files processed per invocation.
The experiment shows a range of values with the execution time stabilising at about 30 minutes. Additionally, the evolution of the execution time of the average and worst performing task is illustrated in figure 4 and can be taken into consideration to choose the right parameter value.
Figure 4: Average and worst performing tasks vs. number of files processed per invocation.
As a reference point, the 5 ARC files have been processed locally on one cluster node in a single-threaded application run in 8 hours and 55 minutes.
The cluster used in this experiment has one controller machine (Master) and 5 worker machines (Slaves). The master node has two quadcore CPUs (8 physical/16 HyperThreading cores) with a clock rate of 2.40GHz and 24 Gigabytes of RAM. The slave nodes each have one quadcore CPU (4 physical/8 HyperThreading cores) with a clock rate of 2.53GHz and 16 Gigabytes of RAM. Regarding the Hadoop configuration, five processor cores of each worker machine have been assigned to Map tasks, two cores to Reduce tasks, and one core is reserved for the operating system. This gives a total of 25 processing cores for Map tasks and 10 cores for Reduce tasks. The best execution time on the cluster was about 30 minutes, which compares to the single-threaded execution time as illustrated in figure 5.
Figure 5: Single-threaded execution on one cluster node vs. cluster execution.
Processing larger data sets can be done in a similar manner to the one that is shown in figure 2, only that a list of input directory HDFS paths determines the sequence of workflow runs and the number of files per FITS invocation is set as a single fixed value.
The following screencast shows a brief demo of the workflow using a tiny ARC file containing the harvest of an HTML page referencing a PNG image. It demonstrates how Taverna orchestrates the Hadoop jobs using tool service components.
Mitch Fraas, Scholar in Residence at the Kislak Center for Special Collections, Rare Books, and Manuscripts at the University of Pennsylvania and Acting Director, Penn Digital Humanities Forum, writes about using Viewshare for mapping library book markings. We’re always excited to see the clever and interesting ways our tools are used to expose digital collections, and Mitch was gracious enough to talk about his experience with Viewshare in the following interview.
Erin: I really enjoyed reading about your project to map library book markings of looted books in Western Europe during the 1930s and 1940s. Could you tell us a bit about your work at the University of Pennsylvania Libraries with this collection?
Mitch: One of the joys of working in a research library is being exposed to all sorts of different researchers and projects. The Kislak Center at Penn is home to the Penn Provenance Project, which makes available photographs of provenance markings from several thousand of our rare books. That project got me thinking about other digitized collections of provenance markings. I’ve been interested in WWII book history for a while and I was fortunate to meet Kathy Peiss, a historian at Penn working in the field, and so hit upon the idea of this project. After the war, officials at the Offenbach collecting point for looted books took a number of photographs of book stamps and plates and made binders for reference. Copies of the binders can be found at the National Archives and Records Administration and the Center for Jewish History. For the set on Viewshare, I used the digitized NARA microfilm of the binders.
Erin: I was particularly excited to see that you used Viewshare as the tool to map the collection. What prompted your use of Viewshare and why did you think it would be a good fit for your project?
Mitch: Viewshare really made this project simple and easy to do. I first heard about it through the library grapevine maybe a year and a half ago and started experimenting with it for some of Penn’s manuscript illuminations. I like the ease of importing metadata from delimited files like spreadsheets into Viewshare and the built-in mapping and visualization features. Essentially it allowed me to focus on the data and worry less about formatting and web display.
Erin: You mentioned that these photographs of the book markings are available through NARA’s catalog and that CJH has digitized copies of albums containing photos of the markings. Could you talk a little about the process of organizing the content and data for your view. For example, what kinds of decisions did you make with respect to the data you wanted to include in the view?
Mitch: This is always a difficult issue when dealing with visualizations. Displaying data visually is so powerful that it can obscure the choices made in its production and overdetermine viewer response. There are several thousand book markings from looted books held by NARA and the CJH, but I chose just those identified in the 1940s as originating from “Germany.” Especially when mapping, I worried that providing a smattering of data from throughout the collection could be extremely misleading and wanted as tight a focus as possible. Even with this, of course, there are still many holes and elisions in the data. For example, my map includes book stamps from today’s Russia, Czech Republic, Hungary and Poland. These were of course part of the Third Reich at the time, but book markings from those countries are found in many different parts of the albums, as the officers at the Offenbach depot who sorted book markings had separate “Eastern” albums largely based on language – so for these areas the map definitely shows only an extremely fragmentary picture.
Erin: We’ve found that users of Viewshare often learn things about their collections through the different views they build – maps, timelines, galleries, facets, etc. What was the most surprising aspect of the collection you learned through Viewshare?
Mitch: I have to admit to being surprised at the geographic distribution of these pre-war libraries. Though obviously there are heavy concentrations in large cities like Berlin, there are also an enormous variety of small community libraries spread throughout Germany represented in the looted books. I didn’t get a real sense for this distribution until I saw the Viewshare map for the first time.
Erin: Your project is an interesting example of using digitized data to do cross-border humanities research. Could you talk about some of the possibilities and challenges of using a visualization and access tool like Viewshare for exchanging data and collaborating with scholars around the world?
Mitch: Thanks to what I was able to do with Viewshare I got in touch with Melanie Meyers, a librarian at the CJH, and am happy to say that the library there is working on mapping all of the albums from the Offenbach collection. The easy data structure for Viewshare has allowed me to share my data with them and I hope that it can be helpful in providing a more complete picture of pre-war libraries and book culture.
Erin: Do you have any suggestions for how Viewshare could be enhanced to meet the diverse needs of scholars?
Mitch: Though easier said than done, the greatest need for improvement I see in Viewshare is in creating a larger user and viewer base. The images I use for my Viewshare collection are hosted via Flickr which has much less structured data functionality but has a built-in user community and search engine visibility. In short, I’d love to see Viewshare get all the publicity it can!
Who are you?
My name is Zeynep PEHLIVAN. I joined the University Pierre and Marie Curie (UPMC) for a master's degree in 2009. I recently received my PhD from the same university. I have been involved in the SCAPE project since September 2012.
Tell us a bit about your role in SCAPE and what SCAPE work you are involved in right now?
I am the work package lead for the Quality Assurance Components work package within the Preservation Components sub-project, under the supervision of Stéphane Gançarski and Matthieu Cord. In the coming months, I will also be involved in the development of Quality Assurance tools for UPMC.
Why is your organisation involved in SCAPE?
Our team at UPMC has been conducting research on digital preservation, especially on web archiving, for a while. As a university, participating in this project allows us to better evaluate users' real needs, to see how our research results are used in real life and to collaborate with different institutions.
What are the biggest challenges in SCAPE as you see it?
I think that, due to its size and international scope, a project like SCAPE will have several administrative challenges. However, above all, the most important challenges for me are the technical ones. There are so many useful tools developed in the project, answering different issues related to digital preservation in different development environments. Integrating these tools into one single system is a big challenge, but I think that now, in the last year of the project, we see the light at the end of the tunnel.
In addition, digital objects are ephemeral for a variety of reasons. Taking this ephemeral nature into account while designing our solutions is another challenge. Although it is well studied in the project, we cannot predict all the issues that ephemerality raises for the durability of the system.
What do you think will be the most valuable outcome of SCAPE?
Digital collections are getting larger every day. Thus, when we talk about digital collections, we are in fact talking about “big data”. As the name of the project indicates, scalability will be the most valuable outcome of the project, in my opinion.
Digital collections represent a huge information source. If access to these collections is not provided, the preservation effort can unfortunately become irrelevant. Previous work shows that users of digital collections need to analyze, compare and evaluate the information. It will be interesting to see access tools developed that let users search, evaluate and visualize these huge collections.
Contact information
University Pierre and Marie Curie
4 Place Jussieu, 75005, Boite 169
If you’ve ever been to a warehouse store on a weekend afternoon, you’ve experienced the power of the sample. In the retail world, samples are an important tool to influence potential new customers who don’t want to invest in an unknown entity. I certainly didn’t start the day with lobster dip on my shopping list but it was in my cart after I picked up and enjoyed a bite-sized taste. It was the sample that proved to me that the product met my requirements (admittedly, I have few requirements for snack foods) and fit well within my existing and planned implementation infrastructure (admittedly, not a lot of thought goes into my meal-planning) so the product was worth my investment. I tried it, it worked for me and fit my budget so I bought it.
Of course, samples have significant impact far beyond the refrigerated section of warehouse stores. In the world of digital file formats, there are several areas of work where sample files and curated groups of sample files, which I call test sets, can be valuable.
The spectrum of sample files
Sample files are not all created equal. Some are created as perfect, ideal examples of the archetypal golden file, some might have suspected or confirmed errors of varying degrees, while still others are engineered to be non-conforming or just plain bad. Is an ideal “golden” everything-works-perfectly example always what you need, or do less-than-perfect files have a place? I’d argue that you need both. It’s always good to have a valid and well-formed sample, but you often learn more from non-conforming files because they can highlight points of failure or other issues.
Oliver Morgan of MetaGlue, Inc., an expert consultant working with the Federal Agencies Digitization Guidelines Initiative AV Working Group on the MXF AS-07 application specification, has developed the “Index of Metals” scale for sample files created specifically for testing purposes during the specification drafting process; the scale ranges from gold (engineered to be good/perfect) to plutonium (engineered to be poisonous).
Ideally, the file creator would have the capability and knowledge to make files that conform to specific requirements, so they know what’s good, bad and ugly about each engineered sample. Perhaps equally important as the file itself is the accompanying documentation, which describes the goal and attributes of the sample. Some examples of this type of test set are the Adobe Acrobat Engineering PDF Test Suites and Apple’s QuickTime Sample Files.
Of course, not all sample files are planned out and engineered to meet specific requirements. More commonly, files are harvested from available data sets, web sites or collections and repurposed as de facto digital file format sample files. One example of this type of sample set is the Open Planets Foundation’s Format Corpus. These files can be useful for a range of purposes. Viewed in the aggregate, these ad hoc sample files can help establish patterns and map out structures for format identification and characterization when format documentation or engineered samples are deficient or lacking. Conversely, these non-engineered test sets can be problematic, especially when they deviate from the format specification standard. How divergent from the standard is too divergent before the file is considered fatally flawed or even another file format?
Audiences for sample files
In the case of specification drafting, engineered sample files can be useful not only as part of a feedback loop for the specification authors, highlighting potential problems and omissions in the technical language, but also later on to manufacturers and open-source developers who want to build tools that can interact with the file type to produce valid results.
At the Library of Congress, we sometimes examine sample files when working on the Sustainability of Digital Formats website so we can see with our own eyes how the file is put together. Reading specification documentation (which, when it exists, isn’t always as comprehensive as one might wish) is one thing but actually seeing a file through a hex viewer or other investigative tool is another. The sample file can clarify and augment our understanding of the format’s structure and behavior.
Other efforts focusing on format identification and characterization issues, such as JHOVE and JHOVE2, the UK National Archives’ DROID, OPF’s Digital Preservation and Data Curation Requirements and Solutions and Archive Team’s Let’s Solve the File Format Problem, have a critical need for format samples, especially when other documentation about the format is incomplete or simply doesn’t exist. Sample files, especially engineered test sets, can help efforts such as NARA’s Applied Research and their partners establish patterns and rules, including identifying magic numbers, which are an essential component of digital preservation research and workflows. Format registries like PRONOM and UDFR rely on the results of this research to support digital preservation services.
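As a simple illustration of what identifying magic numbers means in practice, the sketch below (mine, not drawn from DROID or JHOVE) checks a file's leading bytes against a few well-known signatures; production identification tools rely on much larger, registry-maintained signature sets such as PRONOM's.

```python
# A few well-known file signatures ("magic numbers") and the formats they indicate.
SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "PNG image"),
    (b"%PDF", "PDF document"),
    (b"\xff\xd8\xff", "JPEG image"),
    (b"GIF87a", "GIF image (87a)"),
    (b"GIF89a", "GIF image (89a)"),
    (b"II*\x00", "TIFF image (little-endian)"),
    (b"MM\x00*", "TIFF image (big-endian)"),
]

def identify(path):
    """Return a best-guess format name based on the file's leading bytes."""
    with open(path, "rb") as f:
        header = f.read(16)
    for magic, name in SIGNATURES:
        if header.startswith(magic):
            return name
    return "unidentified"

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        print(path, "->", identify(path))
```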
Finally, there are the institutional and individual end users who might want to implement the file type in their workflows or adopt it as a product, but first they want to play with it a bit. Sample files can help potential implementers understand how a file type might fit into existing workflows and equipment and how it might compare, on an information storage level, with other file format options, as well as help assess the learning curve for staff to understand the file’s structure and behavior. Adopting a new file format is no small decision for most institutions, so sample files allow technologists to evaluate whether a particular format meets their needs and to estimate the level of investment.
Crossing the River: An Interview With W. Walker Sampson of the Mississippi Department of Archives and History
The following is a guest post by Jefferson Bailey, Strategic Initiatives Manager at Metropolitan New York Library Council, National Digital Stewardship Alliance Innovation Working Group co-chair and a former Fellow in the Library of Congress’s Office of Strategic Initiatives.
Regular readers of The Signal will no doubt be familiar with the Levels of Digital Preservation project of the NDSA. A number of posts have described the development and evolution of the Levels themselves as well as some early use cases. While the blog posts have generated excellent feedback in the comments, the Levels team has also been excited to see a number of recent conference presentations that described the Levels in use by archivists and other practitioners working to preserve digital materials. To explore some of the local, from-the-trenches narratives of those working to develop digital preservation policies, resources and processes, we will be interviewing some of the folks currently using the Levels in their day-to-day work. If you are using the Levels within your organization and are interested in chatting about it, feel free to contact us via our email addresses listed on the project page linked above.
In this interview, we are excited to talk with W. Walker Sampson, Electronic Records Analyst, Mississippi Department of Archives and History.
JB: Hi Walker. First off, tell us about your role at the Mississippi Department of Archives and History and your day-to-day activities within the organization.
WS: I’m officially an ‘electronic records analyst’ in our Government Records section. It’s a new position at the archives so my responsibilities can vary a bit. While I deliver electronic records management training to government employees, I do most of my work in and with the Electronic Archives group here. This ranges from electronic records processing to a number of digital initiatives – Flickr, Archive-It and I think most importantly a reconsideration of our digital repository structure.
JB: What are some of the unique challenges to working on digital preservation within a state agency, especially one that “collects, preserves and provides access to the archival resources of the state, administers museums and historic sites and oversees statewide programs for historic preservation, government records management and publications”? That is a diverse set of responsibilities!
WS: It is! Fortunately for us, those duties are allocated to different divisions within the department. Most of the digital preservation responsibilities are directed to the Archives and Records Services division.
The main challenges here are twofold: a large number of records creators – over two hundred state agencies and committees – and, following from that, a potentially voluminous amount of born-digital records to process and maintain. I suppose, however, that this latter challenge may not be unique to state archives.
I would also say that governance is a perennial issue for us, as it may be for a number of state archives. That is, it can be difficult to establish oversight for any state organization’s records at any given point of the life cycle. According to our state code we have a mandate to protect and preserve, but this does not translate into clear actions that we can take to exercise oversight.
JB: How have MDAH’s practices and workflows evolved as the amount of digital materials it collects and preserves has increased?
WS: MDAH is interesting because we started an electronic archives section relatively early, in 1996. We were able to build up a lot of the expertise in house to process electronic records through custom databases, scripts and web pages. This initiative was put together before I began working here, but one of my professors in the School of Information at UT Austin, Dr. Patricia Galloway, was a big part of that first step.
Since then the digital preservation tool or application ‘ecosystem’ has expanded tremendously. There’s an actual community with stories, initiatives, projects and histories. However, we mostly do our work with the same strategy as we began – custom code, scripts and pages. It has been difficult to find a good time to cross the river and use more community-based tools and workflows. We have an immense amount of material that would need to be moved into any new system, and one can find different strata of description and metadata formatting practices over time.
I think that crossing will help us handle the increasing volume, but I also think this big leap into a community-based software (Archivematica, DSpace and so on) will give us an opportunity to reconsider how digital records processing and management happens.
JB: Having seen your presentation at the SAA 2013 conference during the Digital Preservation in State and Territorial Archives: Current State and Prospects for Improvement panel, I was very interested in your discussion of using the Level of Digital Preservation as part of a more comprehensive self-assessment tool. Tell us both about your overall presentation and about your use of the Levels.
WS: I should start by just covering briefly the Digital Preservation Capability Maturity Model. This is a digital preservation model developed by Lori Ashley and Charles Dollar, and it is designed to be a comprehensive assessment of a digital repository. The intention is to analyze a repository by its constituent parts, with organizations then investigating each part in turn to understand where their processes and policies should be improved. It is up to the particular organization to prioritize what aspects are most relevant or critical to them.
The Council of State Archivists developed a survey based off this model, and all state and territorial archives took that survey in 2011. The intention here was to try and get an accurate picture of where preservation of authentic digital records stands across the country’s state archives.
This brings us to the SAA 2013 presentation. I presented MDAH’s background and follow-up to this survey along with two other state archives, Alabama and Wyoming. In my portion I highlighted two areas for improvement for us here in Mississippi, the first being policy and the second technical capacity.
Although the Levels of Digital Preservation are meant to advise on the actual practice of preservation, we have looked at the chart as a way to articulate policy. The primary reason for this is because the chart really helps to clarify at least some of what we are protecting against. That helps communicate why a body like the legislature ought to have a stake in us.
For example, when I look across the Storage and Geographic Location row of the chart, I’m closer to communicating what we should say in a storage section of a larger digital preservation policy. It’s easier for me to move from “MDAH will create backup copies of preserved digital content” to “MDAH will ensure the strategic backup of digital content which can protect against internal, external and environmental threats,” or something to that effect.
Second, I think the chart can help build internal consensus on what our preservation goals are, and what the basic preservation actions should be, independent of any specific technology. Those are important prerequisites to a policy.
Last, and I think this goes along with my second point, I don’t think policies come out of nowhere. In other words, while it strikes me that some part of a policy should be aspirational, for the most part we want to deliver on our stated policy goals. The chart has helped to clarify what we can and can’t do at this point.
JB: Using the Levels within a larger preservation assessment model is an interesting use case. What specific areas of the DPCMM did the Levels help address? The DPCMM is a much more extensive model and focuses more on self-assessment and ranking, whereas the Levels establish accepted practices at numerous degrees. What were the benefits or drawbacks of using these two documents together?
WS: Besides helping to demonstrate some policy goals, I think the Levels apply most directly to objectives in Digital Preservation Strategy, Ingest, Integrity and Security. There’s some significant overlap in content there, in terms of fixity checks, storage redundancy, metadata and file playback. When you look at the actual survey (a copy of this online somewhere…?), they recommend generally similar actions. I think that’s a good indication of consensus in the digital preservation community, and that these two resources are on target.
While I don’t think there’s a marked drawback to using the two documents together – I haven’t spotted any substantive differences in their preservation advice where their subject areas overlap – one does have to keep in mind the narrower scope of the Levels. In addition, the DPCMM has the OAIS framework as one of its touchstones, so you find ample reference to SIPs, DIPs, AIPs, designated communities and other OAIS concepts. The Levels of Digital Preservation are not going to explicitly address those expectations.
JB: One aspect of the Levels that has been well received is the functional independence of the boxes/blocks. An individual or institution can currently be at different levels in different activity areas of the grid. I would be interested to hear how this aspect helped (or hindered) the document’s use in policy development specifically.
WS: I think it’s been very helpful in formulating policy. The functional independence of the levels lets the chart identify more preservation actions than it might otherwise. While some of those actions won’t ever be specifically articulated in a policy, some certainly will.
For example, the second level of the File Formats category – “Inventory of file formats in use” – is probably not going to be expressed in a policy, though levels 3 and 4 may be. It isn’t necessarily the case that higher levels correlate with policy material, however. For instance, level 1 for Information Security is really more applicable to a policy statement than the level 4 action.
JB: One of the goals of the Levels of Preservation project is to keep its guidance clear and concise, while remaining sensitive to the varied institutional contexts in which the guidance might be used. I would be interested to hear how this feature informed the self-assessment process.
WS: Similar to the functional independence, I think it’s a great feature. The Levels don’t present a monolithic single-course track to preservation capacity, so it doesn’t have to be dismissed entirely in the case that some actions don’t really apply. That said, I felt like really all the actions applied to us quite well, so I think we’re well within the target audience for the document.
The DPCMM really shares this feature. Although it’s meant to help an institution build to trustworthy repository status, it’s not a linear recommendation where an organization is expected to move from one component section to the next. The roadmap would change considerably from one institution to the next.
The December 2013 issue of the Library of Congress Digital Preservation newsletter (pdf) is now available!
- Beyond the Scanned Image: Scholarly Uses of Digital Collections
- Ten Tips to Preserve Holiday Digital Memories
- Anatomy of a Web Archive
- Updates on FADGI: Still Image and Audio Visual
- Guitar, Bass, Drums, Metadata
- Upcoming events: CNI meeting, Dec 9-10; NDSA Regional meeting, Jan 23-24; ALA Midwinter, Jan 24-28; CurateGear, Jan 8; IDCC, Feb 24-27.
- Conference report on Best Practices Exchange
- Insights Interview with Brian Schmidt
- Articles on personal digital archiving, residency program, and more
To subscribe to the newsletter, sign up here.
More than 20 developers attended the ‘Hadoop-driven digital preservation Hackathon’ in Vienna, which took place in the baroque room called "Oratorium" at the Austrian National Library from 2 to 4 December 2013. It was really exciting to hear people talking animatedly about Hadoop, Pig, Hive and HBase, followed by quiet phases of concentrated coding accompanied by the background noise of mouse clicks and keyboard typing.
There were Hadoop newbies, people from the SCAPE Project with some knowledge of Apache Hadoop-related technologies and, finally, Jimmy Lin, who currently works as an associate professor at the University of Maryland and who was previously employed as chief data scientist at Twitter. There is no doubt that his profound knowledge of using Hadoop in an ‘industrial’ big data context gave this event that certain something.
The topic of this Hackathon was large-scale digital preservation in the web archiving and digital books quality assurance domains. People from the Austrian National Library presented application scenarios and challenges and introduced the sample data which was provided for both areas on a virtual machine together with a pseudo-distributed Hadoop-installation and some other useful tools from the Apache Hadoop ecosystem.
I am sure that Jimmy’s talk about Hadoop was the reason why so many participants became curious about Apache Pig, a powerful tool which Jimmy humorously characterised as the tool for lazy pigs aiming for hassle-free MapReduce. Jimmy gave a live demo, running some Pig scripts on the cluster at his university and explaining how Pig can be used to find out which links point to each web page in a web archive data sample from the Library of Congress. When I asked Jimmy for his opinion on Pig and Hive as two alternatives for data scientists to choose from, I found it interesting that he did not seem to have a strong preference for Pig. If an organisation has a lot of experienced SQL experts, he said, Hive is a very good choice. On the other hand, from the perspective of the data scientist, Pig offers a more flexible, procedural approach to manipulating and analysing data.
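As a rough illustration of the grouping such a Pig script expresses, here is a minimal Python sketch of the same idea; the input file name and its two-column layout are assumptions for the sketch, not the actual demo data.

```python
from collections import defaultdict

# Sketch of the "which links point to each page" grouping that a Pig script
# expresses as a GROUP BY. Assumes a tab-separated file of
# source_url <TAB> target_url pairs extracted from a web archive.
inlinks = defaultdict(set)

with open("links.tsv", encoding="utf-8") as f:
    for line in f:
        source, target = line.rstrip("\n").split("\t")
        inlinks[target].add(source)

# Pages with the most distinct inbound links first.
for target, sources in sorted(inlinks.items(), key=lambda kv: -len(kv[1]))[:10]:
    print(len(sources), target)
```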
Towards the end of the first day we started to split into several groups. People gathered ideas in a brainstorming session which in the end led to several working groups.
Many participants started Pig scripting during the event, so one cannot expect code that is ready to be used in a production environment, but the results give us many starting points for planning projects with similar requirements.
On the second day, there was another talk by Jimmy about HBase and his project WarcBase, which looks like a very promising approach to providing scalable infrastructure using HBase, with a very responsive user interface that offers the basic functionality the Wayback Machine provides for rendering ARC and WARC web archive container files. In my opinion, the upside of his talk was seeing HBase as a tremendously powerful database on top of Hadoop’s distributed file system (HDFS), with Jimmy brimming over with ideas about possible use cases for scalable content delivery using HBase. The downside was hearing his experiences of how complex the administration of a large HBase cluster can actually become. First, in addition to the Hadoop administration, there are additional daemons (ZooKeeper, RegionServer) which we must keep running. He also explained how the need to compact data stored in HFiles, just when you believe the HBase cluster is well balanced, can lead to what the community calls a “compaction storm”, blowing up your cluster in a way that luckily only manifests itself in endless Java stack traces.
One group provided full-text search for WarcBase. They picked up the core ideas from the developer groups and presentations to build a cutting-edge environment where the web archive content was indexed by the Terrier search engine and the index was enriched with metadata from Apache Tika’s MIME type and language detection. There were two ways to add metadata to the index. The first option was to run a pre-processing step that uses a Pig user defined function to output the metadata of each document. The second option was to use Apache Tika during indexing to detect both the MIME type and the language. In my view, this group won the prize for the fanciest set-up, sharing resources and daemons running on their laptops.
I must say that I was especially happy about the largest working group, where outcomes were dynamically shared between developers: one developer implemented a Pig user defined function (UDF) making use of Apache Tika’s language detection API (see the section on MIME type detection), which the next developer used in a Pig script for MIME type and language detection. Alan Akbik, SCAPE project member, computational linguist and Hadoop researcher from the University of Berlin, also reused building blocks from this group to develop Pig scripts for analysing old German texts, using dictionaries as a means to determine the quality of noisy OCRed text. As an experienced Pig scripter he produced impressive results and deservedly won the Hackathon’s competition for the best presentation of outcomes.
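A dictionary-based quality check of the kind described can be sketched in a few lines of Python; the tiny word list and the scoring below are purely illustrative stand-ins, not Alan’s actual UDFs or scripts, and a real run would load a full historical German dictionary.

```python
import re

def ocr_quality(text: str, dictionary: set) -> float:
    """Fraction of tokens found in the dictionary; a rough signal of OCR noise."""
    tokens = re.findall(r"[^\W\d_]+", text.lower(), flags=re.UNICODE)
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in dictionary)
    return hits / len(tokens)

# Illustrative use with a tiny stand-in word list.
words = {"der", "die", "das", "und", "buch", "seite"}
print(ocr_quality("Das Bu(h und die Seite", words))  # a low score flags noisy OCR
```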
The last group experimented with the functionality of classical digital preservation tools for file format characterisation and identification, like Apache Tika, DROID and Unix file, and looked into ways to improve their performance on the Hadoop platform. It’s worth highlighting that digital preservation guru Carl Wilson found a way to replace the command line invocation of Unix file in FITS with a Java API invocation, which proved to be far more efficient.
Finally, Roman Graf, researcher and software developer from the Austrian Institute of Technology, took images from the Austrian Books Online project in order to develop Python scripts which can be used to detect page cropping errors and which were especially designed to run on a Hadoop platform.
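As an illustration of one simple heuristic for such a check (not Roman’s actual scripts), a page image can be flagged when dark pixels run right up to the image border, which often indicates content that was cropped off; the thresholds below are placeholders.

```python
from PIL import Image
import numpy as np

def touches_border(path: str, margin: int = 5, dark: int = 64, ratio: float = 0.05) -> bool:
    """Flag a scanned page whose dark (ink) pixels reach the outer margin,
    a rough sign that content was cropped off. Thresholds are illustrative."""
    img = np.asarray(Image.open(path).convert("L"))
    border = np.concatenate([
        img[:margin, :].ravel(), img[-margin:, :].ravel(),
        img[:, :margin].ravel(), img[:, -margin:].ravel(),
    ])
    return (border < dark).mean() > ratio

# print(touches_border("page_0001.tif"))
```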
On the last day, we had a panel session with people talking about their experiences of the day-to-day work with Hadoop clusters and the plans they have for the future of their cluster infrastructure.
I really enjoyed these three days and I was impressed by the knowledge and ideas that people brought to this event.
This is part two of the Content Matters interview series interview with Diane Papineau, a geographic information systems analyst at the Montana State Library.
Part one was yesterday, December 5, 2013.
Butch: What are some of the biggest digital preservation and stewardship challenges you face at the Montana State Library?
Diane: The two biggest challenges seem to be developing the inventory system and appraising and documenting 25 years of clearinghouse data. MSL is developing the GIS inventory system in-house—we are fortunate that our IT department employs a database administrator and a web developer tasked with this work. The system is in development now and its design is challenging. The system will record not just our archived data, but the Dissemination Information Packages created to serve that data (zipped files, web map services, map applications, etc.) and the relationships between them. For data records alone, we’re wrestling with how to accommodate 13 use cases (data forms and situations), including accommodating parent/child relationships between records. Add to this that we are anxious to be up and running with a sustainable system and the corresponding data discovery tools as we simultaneously appraise and document the clearinghouse data before archiving.
We have archiving procedures in place for the frequently-changing datasets we produce (framework data). However, the existing large collection of clearinghouse data presents a greater challenge. We’re currently organizing clearinghouse data that is actively served and data that’s been squirreled away on external drives, staff hard drives, and even CDs. Much of the data is copies or “near copies” and many original datasets do not have metadata. We need to review and document the data and, for the copies, decide which to archive and which to discard.
When I think of the work ahead of us, I’m reminded of something I read in the GeoMAPP materials. The single most important thing GIS organizations can do to start the preservation process is to organize what they have and document it.
Butch: How have the technologies of digital mapping changed over the past five years? How have those changes affected the work you do?
Diane: The influence of the internet is important to note. Web programmers and lay people are now creating applications and maps using live map services that we make available for important datasets. These are online, live connections to select map data, making mapping possible for people who are not desktop GIS users. With online map makers accessing only a subset of our data (the data provided in these services), we note that they may not make use of the full complement of data we offer. Also, we notice that our patrons are more comfortable these days working with spatial databases, not just shapefiles. This represents a change in patron download data selection, but it would not affect our data and map protocols.
Technologies gaining popularity that may assist our data management and archiving include scripting tools like Python. We anticipate that these tools will help us automate our workflow when creating DIPs, generating checksums, and ingesting data into the archive.
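As a rough illustration of that kind of checksum automation (not MSL’s actual scripts; the file names and manifest layout are assumptions), a minimal Python sketch of verifying fixity at ingest might look like this:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: Path) -> list:
    """Each manifest line: '<sha256>  <relative path>'. Returns paths that fail."""
    failures = []
    for line in manifest.read_text().splitlines():
        expected, rel = line.split(maxsplit=1)
        if sha256_of(manifest.parent / rel) != expected:
            failures.append(rel)
    return failures

# print(verify_manifest(Path("dip_2013/manifest-sha256.txt")))
```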
Butch: At NDIIPP we’ve started to think more about “access” as a driver for the preservation of digital materials. To what extent do preservation considerations come into play with the work that you do? How does the provision of enhanced access support the long-term preservation of digital geospatial information?
Diane: MSL is in the process of digitizing its state publications holdings. Providing easier public access to them was a strong driver for this effort. Web statistics indicated that once digitized, patron access to a document can go up dramatically.
Regarding our digital geographic data, we have a long history of providing online access to this data. Our current efforts to gain physical and intellectual control over these holdings will reveal long-lost and superseded data that we’ll be anxious to make available given our mandate to provide permanent public access. It may be true that patron access to all of our inventoried holdings may result in more support for our GIS programs, but we’ll be preserving the materials and providing public access regardless.
Butch: How widespread is an awareness of digital stewardship and preservation issues in the part of the geographic community in which the Montana State library operates?
Diane: MSL belongs to a network of professionals who understand and value GIS data archiving and who can be relied on to support our efforts with GIS data preservation. That said, these supportive state agencies and local governments may be in a different position with regard to accomplishing their own data preservation. They are likely wrestling with not having the financial and staff resources or perhaps the policies and administrative level support for implementing data preservation in their own organizations. It’s also quite likely that their business needs are focused on today’s issues. Accommodating a later need for data may be seen as less important. The Montana Land Information Advisory Council offers a grant for applicants wanting to write their metadata and archive their data. To date there have been no applicants.
Beyond Montana, I’ve delivered a GIS data preservation talk at two GIS conferences in New England this year. The information was well-received and engagement in these sessions was encouraging. Two New England GIS leaders with similar state data responsibilities showed interest in how Montana implemented archiving based on GeoMAPP best practices.
Butch: Any final thoughts about the general challenges of handling digital materials within archival collections?
Diane: By comparison to the technical hurdles a GIS shop navigates every day, the protocols for preserving GIS data are pretty straightforward. Either the GIS shop packages and archives the data in house or the shop partners with an official archiving agency in their state. For GIS organizations, libraries and archives interested in GIS data preservation, there are many guiding documents available. Start exploring these materials using the NDSA’s draft Geospatial Data Archiving Quick Reference document (pdf).
- A few minor performance optimisations
- The possibility to run FITS in a nailgun server
- Droid updated to version 6
- Apache Tika enhancements
- Numerous bug fixes
- Better error reporting
In this installment of the Content Matters interview series of the National Digital Stewardship Alliance Content Working Group we’re featuring an interview with Diane Papineau, a geographic information systems analyst at the Montana State Library.
Diane was kind enough to answer questions, in consultation with other MSL staff and the state librarian, Jennie Stapp, about the MSL’s collecting mission, especially in regards to their geospatial data collections.
This is part one of a two part interview. The second part will appear tomorrow, Friday December 6, 2013.
Butch: Montana is a little unusual in that the geospatial services division of the state falls under the Montana State Library. How did this come about and what are the advantages of having it set up this way.
Diane: In addition to a traditional role of supporting public libraries and collecting state publications, the Montana State Library (MSL) hosts the Natural Resource Information System (NRIS), which is staffed by GIS Analysts.
NRIS was established by the Montana Legislature in 1983 to catalog the natural resource and water information holdings of Montana state agencies. In 1987, NRIS gained momentum (and funding) from the federal Environmental Protection Agency and Montana Department of Health and Environmental Sciences to support their mining clean-up work on the Superfund sites along the Clark Fork River between Butte and Missoula. This project generated a wealth of GIS data such as work area boundaries, contaminated area locations, and soil sampling sites, which NRIS used to make a multitude of maps for reports and project management. Storing the data and resulting maps at MSL made sense because it is a library and therefore a non-regulatory, neutral agency. Making the maps and data available via a library democratized a large collection of timely and important geographic information and minimized duplication of effort.
GIS was first employed at NRIS in 1987; from that point forward, NRIS functioned as the state’s GIS data clearinghouse, generating and collecting GIS data. NRIS operated for a decade essentially as a GIS service bureau for state government; during this period, NRIS grew into a comprehensive GIS facility, unique among state libraries. In fact, in the mid-1990s, NRIS participated in the first national effort to provide automated search and retrieval of map data. Today, beyond data clearinghouse activities, MSL is involved with state GIS Coordination as well as GIS leadership and education. We also are involved with data creation or maintenance for 10 of the 15 framework datasets (cadastral, transportation, hydrography, etc.) for Montana, and also host a GIS data archive, thanks to our participation as a full partner in the Geospatial Multistate Archive and Preservation Partnership (GeoMAPP)—a project of the National Digital Stewardship Alliance (NDSA).
Butch: Give us an example of some of the Montana State Library digital collections. Any particularly interesting digital mapping collections?
Diane: Our most important digital geographic collection is the full collection of GIS clearinghouse data gathered over the past 25 years. The majority of this data is “born digital” content made available for download and other types of access via our Data List. Within that collection, one of our most sought-after datasets is the Montana Cadastral framework—a statewide dataset of private land ownership illustrated by tax parcel boundaries. The dataset is updated monthly and is offered for download and as a web map service for desktop GIS users and online mapping. We have stored periodic snapshots of this dataset as it has changed through time and we also serve the most recent version of the data via the online Montana Cadastral map application. The map application makes this very popular data accessible to those without desktop GIS software or training in GIS. Another collection to note is our Clark Fork River superfund site data, which may prove invaluable at some point in the future.
In terms of an actual digital map series, our Water Supply/Drought maps come to mind. For at least 10 years now, NRIS has partnered with the Montana Department of Natural Resources and Conservation (DNRC) to create statewide maps illustrating the soil moisture conditions in Montana by county. DNRC supplies the data; NRIS creates the map and maintains the website that serves the collection of maps through time.
Butch: Tell us a bit about how the collection is being (or might be) used. To what extent is it for the general public? To what extent is it for scholars and researchers?
Diane: Our GIS data collection serves the GIS community in Montana and beyond. Users could be GIS practitioners working on land management issues or city/county planning for example. Other collections, such as our land use and land cover datasets and our collection of aerial photos, may be of particular interest to researchers. The general public also utilizes this data; because of phone inquiries we receive, we know that hunters, for example, frequently access the cadastral data in order to obtain landowner permission to hunt on private lands. Though we don’t track individual users due to requirements of library confidentiality, we know that the uses for this collection are virtually limitless.
The general public can access much of the geographic data we serve by using our online mapping applications. For example, patrons can use the Montana Cadastral application that I mentioned plus tools like our Digital Atlas to see GIS datasets for their area of interest. They can use our Topofinder to view topographic maps online or to find a place when, for example, all that’s known is the location’s latitude and longitude. In 2008, in partnership with the Montana Historical Society, we published the Montana Place Names Companion—an online map application that helps patrons to learn the name origin and history of places across the state.
Butch: What sparked the Montana State Library to join the National Digital Stewardship Alliance?
Diane: While we’ve played host to this large collection of GIS data and we have long been recognized as the informal GIS data archive for the state, we had yet to maintain an inventory of our holdings. Thankfully, we never threw data out.
We realized that in order to gain physical and intellectual control over this collection of current and superseded data, we needed to modernize our approach. The timing couldn’t have been better because it coincided with the concluding phase of GeoMAPP. In 2010 MSL participated as an Information Partner, beginning our exposure to formal GIS data archiving issues. Then in 2011, MSL joined GeoMAPP as the project’s last Full Partner. This partnership permitted us to envision applying archivists’ best practices while we reworked and modernized our data management processes.
In some ways we were the GeoMAPP “guinea pig” and we are grateful for that role—so much research had already been done by the other partners and so much information was already available. In return, what MSL could offer to this group was the perspective of three important GeoMAPP target audiences: libraries, archives, and GIS shops.
Butch: Tell us about some of the archiving practices that the Montana State Library has defined as a result of its partnership with GeoMAPP and the National Digital Stewardship Alliance. Why is preservation important for GIS data?
Diane: I’ll start with the “why.” GIS data creation is expensive. By preserving geographic data via archiving, we store that investment of time and money. GIS data is often used to create public policy. Montana has incredibly strong “right to know” laws so preserving data that was once available to decision makers supports later inquiry about current laws and policies. Furthermore, making superseded data discoverable and accessible promotes historically-informed public policy decisions, wise land use planning, and effective natural disaster planning to name just a few use cases. From a state government perspective, the published GIS datasets created by state agencies are considered state publications. Our agency is statutorily mandated to preserve state publications and make them permanently accessible to the public.
To guide us in this modernization, MSL developed data management standards, policies, and procedures that require data preservation using archivists’ best practices. I’ll discuss a few highlights from these standards that illustrate our particular organizational needs as a GIS data collector and producer.
In order to appeal to the greater GIS community in Montana, we decided to use more GIS-friendly terms in place of the three “package” terms from the OAIS model. We think of a Submission Information Package (SIP) as “working data,” a Dissemination Information Package (DIP) as a Published Data Package, and an Archival Information Package (AIP) as an Archive Data Package.
MSL chose to take a “library collection development policy” approach to managing a GIS data collection rather than a “records management” approach, which makes use of records retention schedules. What this means is we’re on the lookout for data we want to collect—appraisal happens at the point of collection. If we take the data, we both archive it (creating an AIP) and make DIPs at the same time. The archive is just another data file repository, though a special one with its own rules. If the data acquired is not quite ready for distribution, we modify it from a SIP (our “working data”) to make it publishable. We do not archive the SIP.
We’re employing the library discipline’s construct of series and collections and their associated parent/child metadata records, which is new to the GIS group here at MSL. In turn, that decision influenced the file structure of our archive. Though ISO topic categories were GeoMAPP suggestions for both data storage and data discovery, MSL chose instead to organize archive data storage by the time period of the content, unless the data is part of a series (e.g., cadastral) or was generated as part of a discrete project and is considered a collection (e.g., the Superfund data). Additional consistency and structure should also come from the use of a new file naming convention (<extent><theme><timeframe>).
MSL is archiving data in its original formats rather than converting all data to an archival format (e.g., shapefile) because each data model offers useful spatial characteristics that we did not want to strip from the archived copy. For archive data packaging, we use the Library of Congress tool “Bagger” and we specifically chose to zip all the associated files together before “bagging” to save space in the archive. Zipping the data also permits us to produce one checksum for the entire package, which simplifies dataset management and dataset integrity checking in the workflow. We decided not to use Bagger’s zip function for this because the resulting AIP produced an excessively deep file structure, burying the data in multiple folder levels. To document the AIP in our data management system, we’ve established new archive metadata fields such as date archived, checksum, data format, and data format version.
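A rough sketch of that zip-then-single-checksum step is shown below; the file names, format values and metadata layout are placeholders, and the actual bagging is done with the Bagger tool rather than in this script.

```python
import hashlib
import json
import zipfile
from datetime import date
from pathlib import Path

def package_dataset(files, extent, theme, timeframe, out_dir="archive"):
    """Zip a dataset's files into one package, compute a single checksum for it,
    and record illustrative archive metadata fields in a sidecar JSON file."""
    name = f"{extent}{theme}{timeframe}"          # <extent><theme><timeframe>
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    zip_path = out / f"{name}.zip"

    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f, arcname=Path(f).name)

    sha256 = hashlib.sha256(zip_path.read_bytes()).hexdigest()
    record = {
        "date_archived": date.today().isoformat(),
        "checksum_sha256": sha256,                # one checksum for the whole package
        "data_format": "file geodatabase",        # placeholder value
        "data_format_version": "unknown",         # placeholder value
        "package": zip_path.name,
    }
    (out / f"{name}.json").write_text(json.dumps(record, indent=2))
    return record

# record = package_dataset(["parcels.gdb.zip"], "Montana", "Cadastral", "2013")
```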
Part two of this interview will appear tomorrow, Friday, December 6, 2013.
Recently, the world of web archiving has been a busy one. Here are some quick updates:
- The National Library of Estonia released the Estonian Web Archive to the public. This is of particular note because the Legal Deposit Law in Estonia allows the archive to be publicly accessible online. If you read Estonian you can browse the 1003 records that make up the 1.6 TB of data in the archive. A broad crawl of the entire Estonian domain is planned in 2014.
- Ed Summers from the Library of Congress gave the keynote address at the National Digital Forum in New Zealand titled The Web as Preservation Medium. Ed is a software developer and offers a great perspective into some technical aspects of preserving the Web. He covers the durability of HTML, the fragility of links, how preservation is interlaced with access, the importance of community action and the value of “small data”.
- The International Internet Preservation Consortium 2014 General Assembly will be held at the Bibliothèque nationale de France in Paris May 19-23, 2014. There is still a little time to submit a proposal to speak at the public event on May 19th titled Building Modern Research Corpora: the Evolution of Web Archiving and Analytics.
Libraries, archives and other heritage or scientific organizations have been systematically collecting web archives for over 15 years. Early stages of web archiving projects were mainly focused on tackling the challenges of harvesting web content, trying to capture an interlinked set of documents, and to rebuild its different layers through time. Institutions, especially those on a national level, were also defining their legal and institutional mandates. Meanwhile, approaches to web studies developed and influenced researchers’ and academics’ use of web archives. New requirements have emerged. While the objective of building generic collections remains valid, web archiving institutions and researchers also need to collaborate in order to build specific corpora – from the live web or from web archives.
At the same time, “surfing the web the way it was” is no longer the only way of accessing archived web content. Methods developed to analyse large data sets – such as data or link mining – are applicable to web archives. Web archive collections can thus be a component of major humanities and social sciences projects and infrastructures. With relevant protocols and tools for analysis, they will provide invaluable knowledge of modern societies.
This conference aims to propose a forum where researchers, librarians, archivists and other digital humanists will exchange ideas, requirements, methods and tools that can be used to collaboratively build and exploit web archive corpora and data sets. Contributions are sought that will present:
- models of collaboration between archiving institutions and researchers,
- methods and tools to perform data analytics on web archives,
- examples of studies performed on web archives,
- alternative ways of archiving web content.
Abstracts (no longer than one page) should be sent to Peter Stirling (peter dot stirling at bnf dot fr) by Friday December 9, 2013. Full details are available at the IIPC website.