Planet DigiPres

Mapping the Movement of Books Using Viewshare: An Interview with Mitch Fraas

The Signal: Digital Preservation - 13 December 2013 - 4:07pm

Mitch Fraas, Scholar in Residence at the Kislak Center for Special Collections, Rare Books, and Manuscripts at the University of Pennsylvania and Acting Director, Penn Digital Humanities Forum, writes about using Viewshare for mapping library book markings.  We’re always excited to see the clever and interesting ways our tools are used to expose digital collections, and Mitch was gracious enough to talk about his experience with Viewshare in the following interview.

Offenbach Library Marks View, created by Mitch Fraas.

Erin:  I really enjoyed reading about your project to map library book markings of looted books in Western Europe during the 1930s and 1940s.  Could you tell us a bit about your work at the University of Pennsylvania Libraries with this collection?

Mitch: One of the joys of working in a research library is being exposed to all sorts of different researchers and projects. The Kislak Center at Penn is home to the Penn Provenance Project, which makes available photographs of provenance markings from several thousand of our rare books. That project got me thinking about other digitized collections of provenance markings. I’ve been interested in WWII book history for a while and I was fortunate to meet Kathy Peiss, a historian at Penn working in the field, and so hit upon the idea of this project. After the war, officials at the Offenbach collecting point for looted books took a number of photographs of book stamps and plates and made binders for reference. Copies of the binders can be found at the National Archives and Records Administration and the Center for Jewish History. For the set on Viewshare, I used the digitized NARA microfilm of the binders.

Erin: I was particularly excited to see that you used Viewshare as the tool to map the collection. What prompted your use of Viewshare and why did you think it would be a good fit for your project?

Mitch: Viewshare really made this project simple and easy to do. I first heard about it through the library grapevine maybe a year and a half ago and started experimenting with it for some of Penn’s manuscript illuminations. I like the ease of importing metadata from delimited files like spreadsheets into Viewshare and the built-in mapping and visualization features. Essentially it allowed me to focus on the data and worry less about formatting and web display.

An individual item record from the Offenbach View.

Erin: You mentioned that these photographs of the book markings are available through NARA’s catalog and that CJH has digitized copies of albums containing photos of the markings. Could you talk a little about the process of organizing the content and data for your view. For example, what kinds of decisions did you make with respect to the data you wanted to include in the view?

Mitch: This is always a difficult issue when dealing with visualizations. Displaying data visually is so powerful that it can obscure the choices made in its production and overdetermine viewer response. There are several thousand book markings from looted books held by NARA and the CJH, but I chose just those identified in the 1940s as originating from “Germany.” Especially when mapping, I worried that providing a smattering of data from throughout the collection could be extremely misleading, and I wanted as tight a focus as possible. Even with this, of course, there are still many holes and elisions in the data. For example, my map includes book stamps from today’s Russia, Czech Republic, Hungary and Poland. These were of course part of the Third Reich at the time, but book markings from those countries are found in many different parts of the albums, as the officers at the Offenbach depot who sorted book markings kept separate “Eastern” albums largely based on language – so for these areas the map definitely shows only an extremely fragmentary picture.

Erin: We’ve found that users of Viewshare often learn things about their collections through the different views they build – maps, timelines, galleries, facets, etc. What was the most surprising aspect of the collection you learned through Viewshare?

Mitch: I have to admit to being surprised at the geographic distribution of these pre-war libraries. Though obviously there are heavy concentrations in large cities like Berlin, there is also an enormous variety of small community libraries spread throughout Germany represented in the looted books. I didn’t get a real sense for this distribution until I saw the Viewshare map for the first time.

A cluster of Jewish libraries around Koblenz.

Erin: Your project is an interesting example of using digitized data to do cross-border humanities research. Could you talk about some of the possibilities and challenges of using a visualization and access tool like Viewshare for exchanging data and collaborating with scholars around the world?

Mitch: Thanks to what I was able to do with Viewshare I got in touch with Melanie Meyers, a librarian at the CJH, and am happy to say that the library there is working on mapping all of the albums from the Offenbach collection. The easy data structure for Viewshare has allowed me to share my data with them and I hope that it can be helpful in providing a more complete picture of pre-war libraries and book culture.

Erin: Do you have any suggestions for how Viewshare could be enhanced to meet the diverse needs of scholars?

Mitch: Though easier said than done, the greatest need for improvement I see in Viewshare is in creating a larger user and viewer base. The images I use for my Viewshare collection are hosted via Flickr which has much less structured data functionality but has a built-in user community and search engine visibility. In short, I’d love to see Viewshare get all the publicity it can!

Categories: Planet DigiPres

Interview with a SCAPEr - Zeynep Pehlivan

Open Planets Foundation Blogs - 13 December 2013 - 9:00am

Who are you?

My name is Zeynep PEHLIVAN. I joined the University Pierre and Marie Curie (UPMC) for a master’s degree in 2009 and recently received my PhD from the same university. I have been involved in the SCAPE project since September 2012.

Tell us a bit about your role in SCAPE and what SCAPE work you are involved in right now?

I am the work package lead for the Quality Assurance Components work package within the Preservation Components sub-project, under the supervision of Stéphane Gançarski and Matthieu Cord. In the coming months, I will also be involved in the development of Quality Assurance tools for UPMC.

Why is your organisation involved in SCAPE?

Our team at UPMC has been conducting research on digital preservation, especially on web archiving, for some time. As a university, participating in this project allows us to better evaluate users’ real needs, to see how our research results are used in real life and to collaborate with different institutions.

What are the biggest challenges in SCAPE as you see it?

I think that, due to its size and its international position, a project like SCAPE will have several administrative challenges. Above all, however, the most important challenges for me are the technical ones. There are so many useful tools developed in the project, addressing different digital preservation issues in different development environments. Integrating these tools into one single system is a big challenge, but now, in the last year of the project, I think we can see the light at the end of the tunnel.

In addition, digital objects are ephemeral for a variety of reasons. Taking this ephemeral nature into account while designing our solutions is another challenge. Although it is well studied in the project, we cannot predict every issue that ephemerality raises for the durability of the system.

What do you think will be the most valuable outcome of SCAPE?

Digital collections are getting larger every day, so when we talk about digital collections we are in fact talking about “big data”. As the name of the project indicates, scalability will be its most valuable outcome, in my opinion.

Digital collections represent a huge information source. If access to these collections is not provided, the preservation effort can ultimately become irrelevant. Previous work shows that users of digital collections need to analyze, compare and evaluate the information. It will be interesting to see access tools developed that let users search, evaluate and visualize these huge collections.

Contact information

Zeynep PEHLIVAN

University Pierre and Marie Curie

4 Place Jussieu, 75005 Boite 169

Zeynep.pehlivan@lip6.fr

Linkedin: http://www.linkedin.com/profile/view?id=7183444

Preservation Topics: SCAPE
Categories: Planet DigiPres

Can I Get a Sample of That? Digital File Format Samples and Test Sets

The Signal: Digital Preservation - 12 December 2013 - 3:36pm

These are my kind of samples! Photo of chocolate mayo cake samples by Matt DeTurck on Flickr

If you’ve ever been to a warehouse store on a weekend afternoon, you’ve experienced the power of the sample. In the retail world, samples are an important tool to influence potential new customers who don’t want to invest in an unknown entity. I certainly didn’t start the day with lobster dip on my shopping list but it was in my cart after I picked up and enjoyed a bite-sized taste. It was the sample that proved to me that the product met my requirements (admittedly, I have few requirements for snack foods) and fit well within my existing and planned implementation infrastructure (admittedly, not a lot of thought goes into my meal-planning) so the product was worth my investment. I tried it, it worked for me and fit my budget so I bought it.

Of course, samples have significant impact far beyond the refrigerated section of warehouse stores. In the world of digital file formats, there are several areas of work where sample files and curated groups of sample files, which I call test sets, can be valuable.

The spectrum of sample files

Sample files are not all created equal. Some are created as perfect, ideal examples of the archetypal golden file, some might have suspected or confirmed errors of varying degrees, while still others are engineered to be non-conforming or just plain bad. Is it always an ideal “golden” everything-works-perfectly example or do less-than-perfect files have a place? I’d argue that you need both. It’s always good to have a valid and well-formed sample, but you often learn more from non-conforming files because they can highlight points of failure or other issues.

Oliver Morgan of MetaGlue, Inc., an expert consultant working with the Federal Agencies Digitization Guidelines Initiative AV Working Group on the MXF AS-07 application specification, has developed the “Index of Metals” scale for sample files created specifically for testing purposes during the specification drafting process. The scale ranges from gold (engineered to be good/perfect) to plutonium (engineered to be poisonous).

An Index of Metals demonstrating a possible range of sample file qualities from gold (perfect) to plutonium (poisonous on purpose). Slide courtesy of Oliver Morgan, MetaGlue, Inc.

Ideally, the file creator would have the capability and knowledge to make files that conform to specific requirements so they know what’s good, bad and ugly about each engineered sample. Perhaps equally as important as the file itself is the accompanying documentation which describes the goal and attributes of the sample. Some examples of this type of test set are the Adobe Acrobat Engineering PDF Test Suites and Apple’s Quicktime Sample Files.

Of course, not all sample files are planned out and engineered to meet specific requirements. More commonly, files are harvested from available data sets, web sites or collections and repurposed as de facto digital file format sample files. One example of this type of sample set is the Open Planets Foundation’s Format Corpus. These files can be useful for a range of purposes. Viewed in the aggregate, these ad hoc sample files can help establish patterns and map out structures for format identification and characterization when format documentation or engineered samples are either deficient or lacking. Conversely, these non-engineered test sets can be problematic, especially when they deviate from the format specification standard. How divergent from the standard is too divergent before the file is considered fatally flawed or even another file format?

Audiences for sample files

In the case of specification drafting, engineered sample files can be useful not only as part of a feedback loop for the specification authors to highlight potential problems and omissions in the technical language, but sample files may also be valuable later on to manufacturers and open-source developers who want to build tools that can interact with the file type to produce valid results.

At the Library of Congress, we sometimes examine sample files when working on the Sustainability of Digital Formats website so we can see with our own eyes how the file is put together. Reading specification documentation (which, when it exists, isn’t always as comprehensive as one might wish) is one thing but actually seeing a file through a hex viewer or other investigative tool is another. The sample file can clarify and augment our understanding of the format’s structure and behavior.

Other efforts focusing on format identification and characterization issues, such as JHOVE and JHOVE2, the UK National Archives’ DROID, OPF’s Digital Preservation and Data Curation Requirements and Solutions and Archive Team’s Let’s Solve the File Format Problem, have a critical need for format samples, especially when other documentation about the format is incomplete or just plain doesn’t exist. Sample files, especially engineered test sets, can help efforts such as NARA’s Applied Research and their partners establish patterns and rules, including identifying magic numbers, which are an essential component of digital preservation research and workflows. Format registries like PRONOM and UDFR rely on the results of this research to support digital preservation services.
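
As a toy illustration of what signature-based identification looks like (this is not the implementation of DROID, JHOVE or any other tool mentioned above), a few lines of Python are enough to sniff a handful of well-known magic numbers; real registries such as PRONOM record far more signatures, along with offsets and version variants:

    import sys

    # A tiny table of well-known file signatures (magic numbers) and the formats they indicate.
    SIGNATURES = {
        b"%PDF-": "PDF document",
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"GIF89a": "GIF image (version 89a)",
        b"\xff\xd8\xff": "JPEG image",
        b"PK\x03\x04": "ZIP-based container (zip, docx, epub, ...)",
    }

    def sniff(path):
        """Return a format guess based on the first bytes of the file."""
        with open(path, "rb") as fh:
            header = fh.read(16)
        for magic, name in SIGNATURES.items():
            if header.startswith(magic):
                return name
        return "unknown (no matching signature)"

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path, "->", sniff(path))

Engineered sample files give scripts and tools like this something trustworthy to be tested against, which is exactly why the research efforts above depend on them.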

Finally, there are the institutional and individual end users who might want to implement the file type in their workflows or adopt it as a product, but first they want to play with it a bit. Sample files can help potential implementers understand how a file type might fit into existing workflows and equipment and how it might compare, on an information storage level, with other file format options, as well as help assess the learning curve for staff to understand the file’s structure and behavior. Adopting a new file format is no small decision for most institutions, so sample files allow technologists to evaluate whether a particular format meets their needs and estimate the level of investment.

Categories: Planet DigiPres

Crossing the River: An Interview With W. Walker Sampson of the Mississippi Department of Archives and History

The Signal: Digital Preservation - 9 December 2013 - 3:02pm

W. Walker Sampson, Electronic Records Analyst, Mississippi Department of Archives and History

The following is a guest post by Jefferson Bailey, Strategic Initiatives Manager at Metropolitan New York Library Council, National Digital Stewardship Alliance Innovation Working Group co-chair and a former Fellow in the Library of Congress’s Office of Strategic Initiatives.

Regular readers of The Signal will no doubt be familiar with the Levels of Digital Preservation project of the NDSA. A number of posts have described the development and evolution of the Levels themselves as well as some early use cases. While the blog posts have generated excellent feedback in the comments, the Levels team has also been excited to see a number of recent conference presentations that described the Levels in use by archivists and other practitioners working to preserve digital materials. To explore some of the local, from-the-trenches narratives of those working to develop digital preservation policies, resources and processes, we will be interviewing some of the folks currently using the Levels in their day-to-day work. If you are using the Levels within your organization and are interested in chatting about it, feel free to contact us via the email addresses listed on the project page linked above.

In this interview, we are excited to talk with W. Walker Sampson, Electronic Records Analyst, Mississippi Department of Archives and History.

JB: Hi Walker. First off, tell us about your role at the Mississippi Department of Archives and History and your day-to-day activities within the organization.

WS: I’m officially an ‘electronic records analyst’ in our Government Records section. It’s a new position at the archives so my responsibilities can vary a bit. While I deliver electronic records management training to government employees, I do most of my work in and with the Electronic Archives group here. This ranges from electronic records processing to a number of digital initiatives – Flickr, Archive-It and I think most importantly a reconsideration of our digital repository structure.

JB: What are some of the unique challenges to working on digital preservation within a state agency, especially one that “collects, preserves and provides access to the archival resources of the state, administers museums and historic sites and oversees statewide programs for historic preservation, government records management and publications”? That is a diverse set of responsibilities!

WS: It is! Fortunately for us, those duties are allocated to different divisions within the department. Most of the digital preservation responsibilities are directed to the Archives and Records Services division.

The main challenges here are twofold: a large number of records creators – over two hundred state agencies and committees – and, following from that, a potentially voluminous amount of born-digital records to process and maintain. I suppose, however, that this latter challenge may not be unique to state archives.

I would also say that governance is a perennial issue for us, as it may be for a number of state archives. That is, it can be difficult to establish oversight for any state organization’s records at any given point of the life cycle. According to our state code we have a mandate to protect and preserve, but this does not translate into clear actions that we can take to exercise oversight.

JB: How have MDAH’s practices and workflows evolved as the amount of digital materials it collects and preserves has increased?

WS: MDAH is interesting because we started an electronic archives section relatively early, in 1996. We were able to build up a lot of the expertise in house to process electronic records through custom databases, scripts and web pages. This initiative was put together before I began working here, but one of my professors in the School of Information at UT Austin, Dr. Patricia Galloway, was a big part of that first step.

Since then the digital preservation tool or application ‘ecosystem’ has expanded tremendously. There’s an actual community with stories, initiatives, projects and histories. However, we mostly do our work with the same strategy as we began – custom code, scripts and pages. It has been difficult to find a good time to cross the river and use more community-based tools and workflows. We have an immense amount of material that would need to be moved into any new system, and one can find different strata of description and metadata formatting practices over time.

I think that crossing will help us handle the increasing volume, but I also think this big leap into a community-based software (Archivematica, DSpace and so on) will give us an opportunity to reconsider how digital records processing and management happens.

JB: Having seen your presentation at the SAA 2013 conference during the Digital Preservation in State and Territorial Archives: Current State and Prospects for Improvement panel, I was very interested in your discussion of using the Level of Digital Preservation as part of a more comprehensive self-assessment tool. Tell us both about your overall presentation and about your use of the Levels.

WS: I should start by just covering briefly the Digital Preservation Capability Maturity Model. This is a digital preservation model developed by Lori Ashley and Charles Dollar, and it is designed to be a comprehensive assessment of a digital repository. The intention is to analyze a repository by its constituent parts, with organizations then investigating each part in turn to understand where their processes and policies should be improved. It is up to the particular organization to prioritize what aspects are most relevant or critical to them.

The Council of State Archivists developed a survey based on this model, and all state and territorial archives took that survey in 2011. The intention was to try to get an accurate picture of where the preservation of authentic digital records stands across the country’s state archives.

This brings us to the SAA 2013 presentation. I presented MDAH’s background and follow-up to this survey along with two other state archives, Alabama and Wyoming. In my portion I highlighted two areas for improvement for us here in Mississippi, the first being policy and the second technical capacity.

Although the Levels of Digital Preservation are meant to advise on the actual practice of preservation, we have looked at the chart as a way to articulate policy. The primary reason is that the chart really helps to clarify at least some of what we are protecting against. That helps communicate why a body like the legislature ought to have a stake in us.

For example, when I look across the Storage and Geographic Location row of the chart, I’m closer to communicating what we should say in a storage section of a larger digital preservation policy. It’s easier for me to move from “MDAH will create backup copies of preserved digital content” to “MDAH will ensure the strategic backup of digital content which can protect against internal, external and environmental threats,” or something to that effect.

Second, I think the chart can help build internal consensus on what our preservation goals are, and what the basic preservation actions should be, independent of any specific technology. Those are important prerequisites to a policy.

Last, and I think this goes along with my second point, I don’t think policies come out of nowhere. In other words, while it strikes me that some part of a policy should be aspirational, for the most part we want to deliver on our stated policy goals. The chart has helped to clarify what we can and can’t do at this point.

JB: Using the Levels within a larger preservation assessment model is an interesting use case. What specific areas of the DPCMM did the Levels help address? The DPCMM is a much more extensive model and focuses more on self-assessment and ranking, whereas the Levels establish accepted practices at numerous degrees. What were the benefits or drawbacks of using these two documents together?

WS: Besides helping to demonstrate some policy goals, I think the Levels apply most directly to objectives in Digital Preservation Strategy, Ingest, Integrity and Security. There’s some significant overlap in content there, in terms of fixity checks, storage redundancy, metadata and file playback. When you look at the actual survey (a copy of this online somewhere…?), they recommend generally similar actions. I think that’s a good indication of consensus in the digital preservation community, and that these two resources are on target.

While I don’t think there’s a marked drawback to using the two documents together – I haven’t spotted any substantive differences in their preservation advice where their subject areas overlap – one does have to keep in mind the narrower scope of the Levels. In addition, the DPCMM has the OAIS framework as one of its touchstones, so you find ample reference to SIPs, DIPs, AIPs, designated communities and other OAIS concepts. The Levels of Digital Preservation are not going to explicitly address those expectations.

JB: One aspect of the Levels that has been well received is the functional independence of the boxes/blocks. An individual or institution can currently be at different levels in different activity areas of the grid. I would be interested to hear how this aspect helped (or hindered) the document’s use in policy development specifically.

WS: I think it’s been very helpful in formulating policy. The functional independence of the levels lets the chart identify more preservation actions than it might otherwise. While some of those actions won’t ever be specifically articulated in a policy, some certainly will.

For example, the second level of the File Formats category – “Inventory of file formats in use” – is probably not going to be expressed in a policy, though levels 3 and 4 may be. It isn’t necessarily the case that higher levels correlate to policy material, however. For instance, level 1 for Information Security is really more applicable to a policy statement than the level 4 action.

JB: One of the goals of the Levels of Preservation project is to keep its guidance clear and concise, while remaining sensitive to the varied institutional contexts in which the guidance might be used. I would be interested to hear how this feature informed the self-assessment process.

WS: Similar to the functional independence, I think it’s a great feature. The Levels don’t present a monolithic single-course track to preservation capacity, so it doesn’t have to be dismissed entirely in the case that some actions don’t really apply. That said, I felt like really all the actions applied to us quite well, so I think we’re well within the target audience for the document.

The DPCMM really shares this feature. Although it’s meant to help an institution build to trustworthy repository status, it’s not a linear recommendation where an organization is expected to progress from one component section to the next. The roadmap would change considerably from one institution to the next.

Categories: Planet DigiPres

December Issue of Digital Preservation Newsletter Now Available

The Signal: Digital Preservation - 6 December 2013 - 5:22pm

The December 2013 issue of the Library of Congress Digital Preservation newsletter (pdf) is now available!

In this issue:

  • Beyond the Scanned Image:  Scholarly Uses of Digital Collections
  • Ten Tips to Preserve Holiday Digital Memories
  • Anatomy of a Web Archive
  • Updates on FADGI: Still Image and Audio Visual
  • Guitar, Bass, Drums, Metadata
  • Upcoming events: CNI meeting, Dec 9-10; NDSA Regional meeting, Jan 23-24; ALA Midwinter, Jan 24-28; CurateGear, Jan 8; IDCC, Feb 24-27.
  • Conference report on Best Practices Exchange
  • Insights Interview with Brian Schmidt
  • Articles on personal digital archiving, residency program, and more

To subscribe to the newsletter, sign up here.

 

Categories: Planet DigiPres

Impressions of the ‘Hadoop-driven digital preservation Hackathon’ in Vienna

Open Planets Foundation Blogs - 6 December 2013 - 4:30pm

More than 20 developers came to the ‘Hadoop-driven digital preservation Hackathon’ in Vienna, which took place in the baroque “Oratorium” room of the Austrian National Library from 2 to 4 December 2013. It was really exciting to hear people talking animatedly about Hadoop, Pig, Hive and HBase, followed by silent phases of concentrated coding accompanied by the background noise of mouse clicks and keyboard typing.

In #scapeproject we always have a high ceiling #hadoop4DP #fb pic.twitter.com/mVRsBfArFC

— Per Møldrup-Dalum (@perdalum) 2 December 2013

There were Hadoop newbies, people from the SCAPE Project with some knowledge of Apache Hadoop-related technologies, and, finally, Jimmy Lin, who currently works as an associate professor at the University of Maryland and who was previously employed as chief data scientist at Twitter. There is no doubt that his profound knowledge of using Hadoop in an ‘industrial’ big data context gave this event that certain something.

The topic of this Hackathon was large-scale digital preservation in the web archiving and digital books quality assurance domains. People from the Austrian National Library presented application scenarios and challenges, and introduced the sample data for both areas, which was provided on a virtual machine together with a pseudo-distributed Hadoop installation and some other useful tools from the Apache Hadoop ecosystem.

I am sure that Jimmy’s talk about Hadoop was the reason why so many participants became curious about Apache Pig, a powerful tool which Jimmy humorously characterised as the tool for lazy pigs aiming for hassle-free MapReduce. Jimmy gave a live demo running some Pig scripts on the cluster at his university, explaining how Pig can be used to find out which links point to each web page in a web archive data sample from the Library of Congress. Asking Jimmy about his opinion on Pig and Hive as two alternatives for data scientists to choose from, I found it interesting that he did not seem to have a strong preference for Pig. If an organisation has a lot of experienced SQL experts, he said, Hive is a very good choice. On the other hand, from the perspective of the data scientist, Pig offers a more flexible, procedural approach to manipulating and analysing data.
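
For readers who have not seen Pig, the gist of that link-counting demo can be sketched in plain Python rather than Pig Latin (this is not Jimmy’s script; the tab-separated input file of source/target pairs is a hypothetical, pre-extracted link list):

    import csv
    import sys
    from collections import Counter

    # Count how many pages link to each target URL, given one "source<TAB>target" pair per line.
    inbound = Counter()
    with open(sys.argv[1], newline="", encoding="utf-8") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if len(row) == 2:
                source_url, target_url = row
                inbound[target_url] += 1

    # Print the ten most-linked-to pages.
    for target, count in inbound.most_common(10):
        print(count, target, sep="\t")

In Pig Latin the same idea is essentially a LOAD, a GROUP BY on the target URL and a COUNT per group, with Pig compiling the script down to MapReduce jobs on the cluster – the kind of hassle-free MapReduce Jimmy was joking about.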

Towards the end of the day we started to split up: people gathered ideas in a brainstorming session which in the end led to several groups:

  • Cropping error detection
  • Full-text search on top of warcbase
  • Hadoop-based Identification and Characterisation
  • OCR Quality
  • Pig User Defined Functions to operate on extracted web content
  • Pig User Defined Functions to operate on METS

Many participants wrote their first Pig scripts during the event, so one cannot expect code that is ready for a production environment, but we can see many starting points for planning projects with similar requirements.

On the second day, there was another talk by Jimmy about HBase and his project WarcBase, which looks like a very promising approach to providing scalable infrastructure: it uses HBase with a very responsive user interface that offers the basic functionality the Wayback Machine provides for rendering ARC and WARC web archive container files. In my opinion, the upside of his talk was seeing HBase as a tremendously powerful database on top of Hadoop’s distributed file system (HDFS), with Jimmy brimming over with ideas about possible use cases for scalable content delivery using HBase. The downside was hearing his experiences of how complex the administration of a large HBase cluster can actually become. In addition to the Hadoop administration, there are extra daemons (ZooKeeper, RegionServer) which must be kept running, and he explained how the need for compacting data stored in HFiles, once you believe that the HBase cluster is well balanced, can lead to what the community calls a “compaction storm”, which can end up blowing up your cluster and which, luckily, only manifests itself as endless Java stack traces.

One group provided full-text search for WarcBase: they picked up the core ideas from the developer groups and presentations to build a cutting-edge environment where the web archive content was indexed by the Terrier search engine and the index was enriched with metadata from Apache Tika’s MIME-type and language detection. There were two ways to add metadata to the index. The first option was to run a pre-processing step that uses a Pig user defined function to output the metadata of each document. The second option was to use Apache Tika during indexing to detect both the MIME type and the language. In my view, this group won the prize for the fanciest set-up, with resources shared and daemons running across their laptops.

I must say that I was especially happy about the largest working group, where outcomes were dynamically shared between developers: one developer implemented a Pig user defined function (UDF) making use of Apache Tika’s language detection API (see section MIME type detection), which the next developer used in a Pig script for MIME type and language detection. Alan Akbik, SCAPE project member, computer linguist and Hadoop researcher from the University of Berlin, also reused building blocks from this group to develop Pig scripts for analysing old German text, using dictionaries as a means to determine the quality of noisy OCRed text. As an experienced Pig scripter he produced impressive results and deservedly won the Hackathon’s competition for the best presentation of outcomes.

The last group experimented with the functionality of classical digital preservation tools for file format characterisation and identification, like Apache Tika, DROID and Unix file, and looked into ways to improve their performance on the Hadoop platform. It’s worth highlighting that digital preservation guru Carl Wilson found a way to replace the command-line invocation of Unix file in FITS with a Java API invocation, which proved to be far more efficient.

Finally, Roman Graf, researcher and software developer at the Austrian Institute of Technology, took images from the Austrian Books Online project in order to develop Python scripts which can be used to detect page cropping errors and which were specifically designed to run on a Hadoop platform.

On the last day, we had a panel session with people talking about experiences regarding the day-to-day work with Hadoop clusters and the plans that they have for the future of their cluster infrastructure.

Panel session, sharing experiences -adventures in implementing #Hadoop4DP #SCAPEProject pic.twitter.com/Mjssc1pSSy

— OPF (@openplanets) 4 December 2013

I really enjoyed these three days and I was impressed by the knowledge and ideas that people brought to this event.

Preservation Topics: Preservation Actions, Identification, Characterisation, Web Archiving, Tools, SCAPE
Categories: Planet DigiPres

Content Matters Interview: The Montana State Library, Part Two

The Signal: Digital Preservation - 6 December 2013 - 3:52pm

Diane Papineau. Photo credit: Patty Ceglio

This is part two of the Content Matters series interview with Diane Papineau, a geographic information systems analyst at the Montana State Library.

Part one was yesterday, December 5, 2013.

Butch: What are some of the biggest digital preservation and stewardship challenges you face at the Montana State Library?

Diane: The two biggest challenges seem to be developing the inventory system and appraising and documenting 25 years of clearinghouse data. MSL is developing the GIS inventory system in-house—we are fortunate that our IT department employs a database administrator and a web developer tasked with this work. The system is in development now and its design is challenging. The system will record not just our archived data, but the Dissemination Information Packages created to serve that data (zipped files, web map services, map applications, etc.) and the relationships between them. For data records alone, we’re wrestling with how to accommodate 13 use cases (data forms and situations), including accommodating parent/child relationships between records. Add to this that we are anxious to be up and running with a sustainable system and the corresponding data discovery tools as we simultaneously appraise and document the clearinghouse data before archiving.

We have archiving procedures in place for the frequently-changing datasets we produce (framework data). However, the existing large collection of clearinghouse data presents a greater challenge. We’re currently organizing clearinghouse data that is actively served and data that’s been squirreled away on external drives, staff hard drives, and even CDs. Much of the data is copies or “near copies” and many original datasets do not have metadata. We need to review the data and document it and for the copies, decide which to archive and which to discard.

When I think of the work ahead of us, I’m reminded of something I read in the GeoMAPP materials. The single most important thing GIS organizations can do to start the preservation process is to organize what they have and document it.

Screen shot of the Montana State Library GIS Archive.

Butch: How have the technologies of digital mapping changed over the past five years? How have those changes affected the work you do?

Diane: The influence of the internet is important to note. Web programmers and lay people are now creating applications and maps using live map services that we make available for important datasets. These are online, live connections to select map data, making mapping possible for people who are not desktop GIS users. With online map makers accessing only a subset of our data (the data provided in these services), we note that they may not make use of the full complement of data we offer. Also, we notice that our patrons are more comfortable these days working with spatial databases, not just shapefiles. This represents a change in patron download data selection, but it would not affect our data and map protocols.

Technologies gaining popularity that may assist our data management and archiving include scripting tools like Python. We anticipate that these tools will help us automate our workflow when creating DIPs, generating checksums, and ingesting data into the archive.

Butch: At NDIIPP we’ve started to think more about “access” as a driver for the preservation of digital materials. To what extent do preservation considerations come into play with the work that you do? How does the provision of enhanced access support the long-term preservation of digital geospatial information?

Diane: MSL is in the process of digitizing its state publications holdings. Providing easier public access to them was a strong driver for this effort. Web statistics indicated that once digitized, patron access to a document can go up dramatically.

Regarding our digital geographic data, we have a long history of providing online access to this data. Our current efforts to gain physical and intellectual control over these holdings will reveal long-lost and superseded data that we’ll be anxious to make available given our mandate to provide permanent public access. It may be true that patron access to all of our inventoried holdings may result in more support for our GIS programs, but we’ll be preserving the materials and providing public access regardless.

Butch: How widespread is an awareness of digital stewardship and preservation issues in the part of the geographic community in which the Montana State library operates?

Montana State Library GIS staff. Photo courtesy Montana State Library.

Diane: MSL belongs to a network of professionals who understand and value GIS data archiving and who can be relied on to support our efforts with GIS data preservation. That said, these supportive state agencies and local governments may be in a different position with regard to accomplishing their own data preservation. They are likely wrestling with not having the financial and staff resources or perhaps the policies and administrative level support for implementing data preservation in their own organizations. It’s also quite likely that their business needs are focused on today’s issues. Accommodating a later need for data may be seen as less important. The Montana Land Information Advisory Council offers a grant for applicants wanting to write their metadata and archive their data. To date there have been no applicants.

Beyond Montana, I’ve delivered a GIS data preservation talk at two GIS conferences in New England this year. The information was well-received and engagement in these sessions was encouraging. Two New England GIS leaders with similar state data responsibilities showed interest in how Montana implemented archiving based on GeoMAPP best practices.

Butch: Any final thoughts about the general challenges of handling digital materials within archival collections?

Diane: By comparison to the technical hurdles a GIS shop navigates every day, the protocols for preserving GIS data are pretty straightforward. Either the GIS shop packages and archives the data in house or the shop partners with an official archiving agency in their state. For GIS organizations, libraries, and archives interested in GIS data preservation, there are many guiding documents available. Start exploring these materials using the NDSA’s draft Geospatial Data Archiving Quick Reference document (pdf).

Categories: Planet DigiPres

A Sustainable Future for FITS

Open Planets Foundation Blogs - 6 December 2013 - 11:37am
As Paul mentioned here, FITS is a classic case of a great digital preservation tool that many of us use and benefit from, but that wasn’t set up to accept community code contributions. Different versions of FITS were proliferating instead of dovetailing into a better product. For this reason we decided to take a look at the situation to see what we could do to change it.

First we looked at the current FITS codebase and all of the forks out there, with the aim of merging all existing stable features and patches. While merging appears a rather trivial task, ensuring that the existing functionality is not broken afterwards isn’t. This is especially tricky when there aren’t (m)any unit tests. Writing unit tests post factum usually involves refactoring code for testability, and as any seasoned developer will likely agree, refactoring a large code base without unit tests usually means one thing: bugs… So how do you verify, with a relatively high level of confidence, that the code base still works as expected following the merge?

Blackbox testing and git-bisect to the rescue! To get around this in the limited time we had available for the FITS Blitz, we decided to use blackbox testing. We created a FITS XML comparator, which compares the output files produced by different FITS versions, and an accompanying script that combines this comparison tool with git-bisect. For those of you who don’t know git-bisect, it’s a tool that is able to pinpoint the specific commit within a git repository that introduced a problem. It does this with the help of a simple binary search and a test suite – in our case the FITS XML comparator (a rough sketch of this approach appears at the end of this post). We were able to go through the different branches and take the ones that didn’t break functionality, but leave the ones that still needed more work.

As a result of all this merging during the FITS Blitz, the next version of FITS will include:
  • A few minor performance optimisations
  • The possibility to run FITS in a nailgun server
  • Droid updated to version 6
  • Apache Tika enhancements
  • Numerous bug fixes
  • Better error reporting
  • Logging
And the best thing is: these are all community improvements! Unfortunately, not all of the contributors have dared to hit the Pull Request button on GitHub, and that is something we have to improve as a community.

In any case, having this simple way of validating that nothing major is broken has another advantage: we can now set up a continuous integration infrastructure that will help FITS maintainers get further insight into future patches before merging them. Note that this doesn’t mean that no unit tests should be written. Quite the opposite: creating a unit test suite and refactoring the core of FITS where necessary is the next logical step.

From this foundation, made possible with a Jisc-funded SPRUCE award, we will now work in partnership with interested members of the community to develop and maintain FITS in a way that we hope will give its users much greater belief in its reliability and ability to accept code contributions. To that end we’re in the process of establishing a Steering Group that will meet regularly to review the status of FITS, manage a more sustainable development process, develop and champion community contributions to FITS, and create a development roadmap for the toolset. The Group will be composed of a variety of experienced FITS developers and users, and we’ll be aiming to be as inclusive as possible within (in particular) the developer community.

So how will all this work in practice? When we’ve added the finishing touches to this phase of the work, Carl will be back to blog about the new development process and how you can get involved to make FITS better. We are also in the process of setting up a new website for FITS to centralize (and improve!) the FITS documentation. Our ultimate aim is to make FITS a community-maintained tool that is kept up to date with a reliable build at everyone’s fingertips, and hopefully demonstrate a better way to sustain community-created preservation tools.

Petar Petrov, Carl Wilson, Andrea Goethals, Spencer McEwen and Paul Wheatley
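
To make the comparator-plus-bisect idea above concrete, here is a rough, illustrative sketch rather than the actual tooling from the FITS Blitz: a small Python script that normalises two FITS XML outputs, ignores fields assumed to be volatile (the attribute names below are guesses, not the real FITS schema), and exits non-zero when the outputs differ, so it can serve as the test for git bisect run.

    import sys
    import xml.etree.ElementTree as ET

    # Attribute names assumed (for illustration only) to vary between runs without being meaningful.
    VOLATILE_ATTRS = {"timestamp", "executionTime", "toolversion"}

    def normalise(elem):
        """Recursively reduce an element to a comparable (tag, attributes, text, children) tuple."""
        attrs = tuple(sorted((k, v) for k, v in elem.attrib.items() if k not in VOLATILE_ATTRS))
        text = (elem.text or "").strip()
        children = tuple(normalise(child) for child in elem)
        return (elem.tag, attrs, text, children)

    def same_output(reference_path, candidate_path):
        """True if both FITS output files are equivalent once volatile fields are ignored."""
        return normalise(ET.parse(reference_path).getroot()) == normalise(ET.parse(candidate_path).getroot())

    if __name__ == "__main__":
        reference, candidate = sys.argv[1], sys.argv[2]
        sys.exit(0 if same_output(reference, candidate) else 1)

A hypothetical wrapper script would rebuild FITS at each step, run it over a sample file and call this comparator against a stored reference output; git bisect run then walks the commit history and reports the first commit whose output diverges.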
Categories: Planet DigiPres

Content Matters Interview: The Montana State Library, Part One

The Signal: Digital Preservation - 5 December 2013 - 7:50pm

Diane Papineau. Photo credit: Patty Ceglio

In this installment of the Content Matters interview series of the National Digital Stewardship Alliance Content Working Group we’re featuring an interview with Diane Papineau, a geographic information systems analyst at the Montana State Library.

Diane was kind enough to answer questions, in consultation with other MSL staff and the state librarian, Jennie Stapp, about the MSL’s collecting mission, especially in regards to their geospatial data collections.

This is part one of a two part interview. The second part will appear tomorrow, Friday December 6, 2013.

Butch: Montana is a little unusual in that the geospatial services division of the state falls under the Montana State Library. How did this come about and what are the advantages of having it set up this way.

Diane: In addition to a traditional role of supporting public libraries and collecting state publications, the Montana State Library (MSL) hosts the Natural Resource Information System (NRIS), which is staffed by GIS Analysts.

NRIS was established by the Montana Legislature in 1983 to catalog the natural resource and water information holdings of Montana state agencies. In 1987, NRIS gained momentum (and funding) from the federal Environmental Protection Agency and Montana Department of Health and Environmental Sciences to support their mining clean-up work on the Superfund sites along the Clark Fork River between Butte and Missoula. This project generated a wealth of GIS data such as work area boundaries, contaminated area locations, and soil sampling sites, which NRIS used to make a multitude of maps for reports and project management. Storing the data and resulting maps at MSL made sense because it is a library and therefore a non-regulatory, neutral agency. Making the maps and data available via a library democratized a large collection of timely and important geographic information and minimized duplication of effort.

GIS was first employed at NRIS in 1987; from that point forward, NRIS functioned as the state’s GIS data clearinghouse, generating and collecting GIS data. NRIS operated for a decade essentially as a GIS service bureau for state government; during this period, NRIS grew into a comprehensive GIS facility, unique among state libraries. In fact, in the mid-1990s, NRIS participated in the first national effort to provide automated search and retrieval of map data. Today, beyond data clearinghouse activities, MSL is involved with state GIS Coordination as well as GIS leadership and education. We also are involved with data creation or maintenance for 10 of the 15 framework datasets (cadastral, transportation, hydrography, etc.) for Montana, and also host a GIS data archive, thanks to our participation as a full partner in the Geospatial Multistate Archive and Preservation Partnership (GeoMAPP)—a project of the National Digital Stewardship Alliance (NDSA).

Butch: Give us an example of some of the Montana State Library digital collections. Any particularly interesting digital mapping collections?

Diane: Our most important digital geographic collection is the full collection of GIS clearinghouse data gathered over the past 25 years. The majority of this data is “born digital” content made available for download and other types of access via our Data List. Within that collection, one of our most sought-after datasets is the Montana Cadastral framework—a statewide dataset of private land ownership illustrated by tax parcel boundaries. The dataset is updated monthly and is offered for download and as a web map service for desktop GIS users and online mapping. We have stored periodic snapshots of this dataset as it has changed through time and we also serve the most recent version of the data via the online Montana Cadastral map application. The map application makes this very popular data accessible to those without desktop GIS software or training in GIS. Another collection to note is our Clark Fork River superfund site data, which may prove invaluable at some point in the future.

In terms of an actual digital map series, our Water Supply/Drought maps come to mind. For at least 10 years now, NRIS has partnered with the Montana Department of Natural Resources and Conservation (DNRC) to create statewide maps illustrating the soil moisture conditions in Montana by county. DNRC supplies the data; NRIS creates the map and maintains the website that serves the collection of maps through time.

Butch: Tell us a bit about how the collection is being (or might be) used. To what extent is it for the general public? To what extent is it for scholars and researchers?

Diane: Our GIS data collection serves the GIS community in Montana and beyond. Users could be GIS practitioners working on land management issues or city/county planning for example. Other collections, such as our land use and land cover datasets and our collection of aerial photos, may be of particular interest to researchers.  The general public also utilizes this data; because of phone inquiries we receive, we know that hunters, for example, frequently access the cadastral data in order to obtain landowner permission to hunt on private lands. Though we don’t track individual users due to requirements of library confidentiality, we know that the uses for this collection are virtually limitless.

The general public can access much of the geographic data we serve by using our online mapping applications. For example, patrons can use the Montana Cadastral application that I mentioned plus tools like our Digital Atlas to see GIS datasets for their area of interest. They can use our Topofinder to view topographic maps online or to find a place when, for example, all that’s known is the location’s latitude and longitude. In 2008, in partnership with the Montana Historical Society, we published the Montana Place Names Companion—an online map application that helps patrons to learn the name origin and history of places across the state.

Butch: What sparked the Montana State Library to join the National Digital Stewardship Alliance?

Diane: While we’ve played host to this large collection of GIS data and we have long been recognized as the informal GIS data archive for the state, we had yet to maintain an inventory of our holdings. Thankfully, we never threw data out.

We realized that in order to gain physical and intellectual control over this collection of current and superseded data, we needed to modernize our approach. The timing couldn’t have been better because it coincided with the concluding phase of GeoMAPP.  In 2010 MSL participated as an Information Partner, beginning our exposure to formal GIS data archiving issues. Then in 2011, MSL joined GeoMAPP as the project’s last Full Partner. This partnership permitted us to envision applying archivists’ best practices while we reworked and modernized our data management processes.

In some ways we were the GeoMAPP “guinea pig” and we are grateful for that role—so much research had already been done by the other partners and so much information was already available. In return, what MSL could offer to this group was the perspective of three important GeoMAPP target audiences: libraries, archives, and GIS shops.

Butch: Tell us about some of the archiving practices that the Montana State Library has defined as a result of its partnership with GeoMAPP and the National Digital Stewardship Alliance. Why is preservation important for GIS data?

Diane: I’ll start with the “why.” GIS data creation is expensive. By preserving geographic data via archiving, we store that investment of time and money. GIS data is often used to create public policy. Montana has incredibly strong “right to know” laws so preserving data that was once available to decision makers supports later inquiry about current laws and policies. Furthermore, making superseded data discoverable and accessible promotes historically-informed public policy decisions, wise land use planning, and effective natural disaster planning to name just a few use cases. From a state government perspective, the published GIS datasets created by state agencies are considered state publications. Our agency is statutorily mandated to preserve state publications and make them permanently accessible to the public.

To guide us in this modernization, MSL developed data management standards, policies, and procedures that require data preservation using archivists’ best practices. I’ll discuss a few highlights from these standards that illustrate our particular organizational needs as a GIS data collector and producer.

In order to appeal to the greater GIS community in Montana, we decided to use more GIS-friendly terms in place of the three “package” terms from the OAIS model. We think of a Submission Information Package (SIP) as “working data,” a Dissemination Information Package (DIP) as a Published Data Package, and an Archival Information Package (AIP) as an Archive Data Package.

MSL chose to take a “library collection development policy” approach to managing a GIS data collection rather than a “records management” approach, which makes use of records retention schedules. What this means is we’re on the lookout for data we want to collect—appraisal happens at the point of collection. If we take the data, we both archive it (creating an AIP) and make DIPs at the same time. The archive is just another data file repository, though a special one with its own rules. If the data acquired is not quite ready for distribution, we modify it from a SIP (our “working data”) to make it publishable. We do not archive the SIP.

Montana State Library Data Collection Management Flow

We’re employing the library discipline’s construct of series and collections and their associated parent/child metadata records, which is new to the GIS group here at MSL. In turn, that decision influenced the file structure of our archive. Though ISO topic categories were GeoMAPP suggestions for both data storage and data discovery, MSL chose instead to organize archive data storage by the time period of content, unless the data is part of a series (e.g., cadastral) or was generated as part of a discrete project and is considered a collection (e.g., the Superfund data). Additional consistency and structure should also come from the use of a new file naming convention (<extent><theme><timeframe>).

MSL is archiving data in its original formats rather than converting all data to an archival format (e.g., shapefile) because each data model offers useful spatial characteristics that we did not want to strip from the archived copy. For archive data packaging, we use the Library of Congress tool “Bagger,” and we specifically chose to zip all the associated files together before “bagging” to save space in the archive. Zipping the data also permits us to produce one checksum for the entire package, which simplifies dataset management and dataset integrity checking in the workflow. We decided not to use Bagger’s zip function for this because the resulting AIP produced an excessively deep file structure, burying the data in multiple folder levels. To document the AIP in our data management system, we’ve established new archive metadata fields such as date archived, checksum, data format, and data format version.
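
As a minimal sketch of the zip-then-checksum packaging described above (not MSL’s actual scripts; the dataset name below is made up), all files belonging to one dataset are bundled into a single zip and hashed once:

    import hashlib
    import zipfile
    from pathlib import Path

    def package_dataset(dataset_dir, zip_path):
        """Zip every file under dataset_dir and return the SHA-256 digest of the resulting zip."""
        dataset_dir, zip_path = Path(dataset_dir), Path(zip_path)
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for item in sorted(dataset_dir.rglob("*")):
                if item.is_file():
                    zf.write(item, item.relative_to(dataset_dir))
        digest = hashlib.sha256()
        with open(zip_path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        checksum = package_dataset("mt_cadastral_2013q4", "mt_cadastral_2013q4.zip")
        print(checksum)  # one checksum covering the whole package

The zip would then be handed to Bagger (or any BagIt implementation), whose manifest ends up listing a single checksum for the whole package, and that digest can be recorded in the archive metadata fields mentioned above.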

Part two of this interview will appear tomorrow, Friday, December 13, 2013.

Categories: Planet DigiPres

Happenings in the Web Archiving World

The Signal: Digital Preservation - 4 December 2013 - 6:11pm

Recently, the world of web archiving has been a busy one. Here are some quick updates:

  • The National Library of Estonia released the Estonian Web Archive to the public. This is of particular note because the Legal Deposit Law in Estonia allows the archive to be publicly accessible online. If you read Estonian, you can browse the 1,003 records that make up the 1.6 TB of data in the archive. A broad crawl of the entire Estonian domain is planned for 2014.
  • Ed Summers from the Library of Congress gave the keynote address at the National Digital Forum in New Zealand, titled The Web as a Preservation Medium. Ed is a software developer and offers a great perspective on some technical aspects of preserving the Web. He covers the durability of HTML, the fragility of links, how preservation is interlaced with access, the importance of community action and the value of “small data”.
  • The International Internet Preservation Consortium 2014 General Assembly will be held at the Bibliothèque nationale de France in Paris, May 19-23, 2014. There is still a little time to submit a proposal to speak at the public event on May 19th, titled Building Modern Research Corpora: the Evolution of Web Archiving and Analytics.

Call for Proposals announcement from the IIPC:

Libraries, archives and other heritage or scientific organizations have been systematically collecting web archives for over 15 years. Early stages of web archiving projects were mainly focused on tackling the challenges of harvesting web content, trying to capture an interlinked set of documents, and to rebuild its different layers through time. Institutions, especially those on a national level, were also defining their legal and institutional mandates. Meanwhile, approaches to web studies developed and influenced researchers’ and academics’ use of web archives. New requirements have emerged. While the objective of building generic collections remains valid, web archiving institutions and researchers also need to collaborate in order to build specific corpora – from the live web or from web archives.

At the same time, “surfing the web the way it was” is no longer the only way of accessing archived web content. Methods developed to analyse large data sets – such as data or link mining – are applicable to web archives. Web archive collections can thus be a component of major humanities and social sciences projects and infrastructures. With relevant protocols and tools for analysis, they will provide invaluable knowledge of modern societies.

This conference aims to propose a forum where researchers, librarians, archivists and other digital humanists will exchange ideas, requirements, methods and tools that can be used to collaboratively build and exploit web archive corpora and data sets. Contributions are sought that will present:

  • models of collaboration between archiving institutions and researchers,
  • methods and tools to perform data analytics on web archives,
  • examples of studies performed on web archives,
  • alternative ways of archiving web content.

Abstracts (no longer than one page) should be sent to Peter Stirling (peter dot stirling at bnf dot fr) by Friday December 9, 2013. Full details are available at the IIPC website.

Categories: Planet DigiPres

Week 48: A SCAPE Developer Short Story

Open Planets Foundation Blogs - 4 December 2013 - 10:35am

It's been two weeks since the internal SCAPE developer workshop in Brno, Czech Republic. It was a great workshop. We had a lot of presentations and demos, and were brought up to date on what's going on in the other corners of the SCAPE project. We also had some (loud) discussions, but I think we came to some good agreements on where we as developers are going next. And we started a number of development and productisation activities. I came home with a long list of things to do next week (this ended up not at all being what I did last week, but I still have the list, so next week, fingers crossed). Tasks for week 48:

  • xcorrSound
    • make versioning stable and meaningful (this I looked at together with my colleague in week 48)
    • release new version (this one we actually did)
    • finish writing nice microsite
    • tell my colleague to finish writing small website, where you can test the xcorrSound tools without installing them yourself
    • write unit tests
    • introduce automatic rpm packaging?
    • finish xcorrSound Hadoop job
    • do the xcorrSound Hadoop Testbed Experiment
      • Update the corresponding user story on the wiki
      • Write the new evaluation on the wiki
    • finish the full Audio Migration + QA Hadoop job
    • do the full Audio Migration + QA Hadoop Testbed Experiment
      • Update the corresponding user story on the wiki
      • Write the new evaluation on the wiki
    • write a number of new blog posts about xcorrsound and SCAPE testbed experiments
    • new demo of xcorrsound for the SCAPE all-staff meeting in February
  • SCAPE testbed demonstrations
    • define the demos that we at SB are going to do as part of testbed (this one we also did in week 48; the actual demos we'll make next year)
  • FITS experiment (hopefully not me, but a colleague)
  • JPylyzer experiment (hopefully not me, but a colleague)
  • Mark FFprobe experiment as not active
  • ... there are some more points for the next months, but I'll spare you...

So what did I do in week 48? Well, I sort of worked on the JPylyzer experiment, which is on the list above. In the Digital Preservation Technology Development department at SB we are currently working on a large-scale digitized newspapers ingest workflow, including QA. As part of this work we run JPylyzer from Hadoop on all the ingested files, and then validate a number of properties using Schematron. These properties come from the requirements given to the digitization company, but in the SCAPE context these properties should come from policies, so there is still some work to do for the experiment. But running JPylyzer from Hadoop, and validating properties from the JPylyzer output using Schematron, now seems to work in the SB large-scale digitized newspapers ingest project :-)
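
For readers unfamiliar with this kind of validation, here is a rough, stand-alone sketch of the idea: run jpylyzer on a JP2 file and check a property in its XML report. It is only an illustration of the concept, not the SB Hadoop/Schematron workflow; the file name is hypothetical, and the XML element names are assumptions that vary between jpylyzer versions.

```python
import subprocess
import xml.etree.ElementTree as ET

# Run the jpylyzer command-line tool on one JP2 file and capture its XML report.
# (At SB this runs inside a Hadoop job and the checks are written as Schematron rules.)
result = subprocess.run(["jpylyzer", "newspaper_page.jp2"],
                        capture_output=True, text=True, check=True)
report = ET.fromstring(result.stdout)

# Strip namespaces so the lookup works whether or not the output is namespaced.
for elem in report.iter():
    elem.tag = elem.tag.split("}")[-1]

# Element names are assumptions; older jpylyzer releases use isValidJP2.
print("Valid JP2:", report.findtext(".//isValid") or report.findtext(".//isValidJP2"))
```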

And for now I'll put week 50 on the above list, and when I have finished a sufficient number of bullet points I'll blog again! This post is missing links, so I hope you can read it without them.

Preservation Topics: SCAPE
Categories: Planet DigiPres

Digital Preservation Pioneer: Gary Marchionini

The Signal: Digital Preservation - 3 December 2013 - 8:27pm

Gary Marchionini. Photo by University of North Carolina at Chapel Hill.

In 1971, Gary Marchionini had an epiphany about educational technology when he found himself competing with teletype machines for his students’ attention.

Marchionini was teaching mathematics at a suburban Detroit junior high school the year that the school acquired four new teletype machines. The machines were networked to a computer, so a user could type something into a teletype and the teletype would transmit it to the computer for processing.

The school teletypes accessed “drill and practice” programs. The paper-based teletype would print a math problem, a student would type in the answer, wait patiently for the response over the slow, primitive network and eventually the teletype would print out, “Good” (if it was correct).

“The thing was noisy,” said Marchionini. “But the kids still wanted to leave my math classroom to go do this in the closet. There was something about this clickety clackety paper-based terminal that attracted them.

“Eventually I realized that there were two things going on. One was personalization; each kid was getting his own special attention. The other thing was interactivity; it was back and forth, back and forth with the kids. It was engaging.

“That’s what sparked my interest in computer interaction as a line of research.”

That interest became a lifelong mission for Marchionini. He went on to earn his master’s and doctorate in Math Education and Educational Computing from Wayne State University; he quit teaching public school in 1978, joined the faculty at Wayne State and trained teachers in computer literacy.

In 1983, Marchionini joined the faculty at the University of Maryland College of Library and Information Services; he also joined the Human-Computer Interaction Laboratory.

“It was easy to make the transition from education to library and information services because I always thought of information retrieval as a learning function,” said Marchionini. “The goal of my work was always to enhance learning. And information seeking, from a library perspective… well, people are learning. It could be casual or it could be critical but they are trying to learn something new.”

Marchionini’s research encompassed information science, library science, information retrieval, information architecture and human/computer interaction; in short, interface research. He was especially keen on the power of graphics to help people visualize and conceptualize information, and to help people interact with computers to find that information. In fact, as early as 1979, before the explosion of graphic interfaces on personal computers, Marchionini was coding rudimentary graphic representations on his own.

“One of my projects [in 1979] involved addition ‘grouping’ and subtraction ‘regrouping’ – borrowing and carrying and all that stuff,” said Marchionini. “I wrote a computer program that graphically showed that process as a bundling and unbundling of little white dots on a Radio Shack screen.”

Marchionini is quick to point out that graphics were only a part of his interface research, and there is a time and a place for graphics and for declarative text in human/computer interaction. He said that the challenge for researchers was to determine the appropriate function of each.

One interface project that he worked on at UMd also marked his first involvement with the Library of Congress: working with UMd’s Nancy Anderson, professor of psychology (now retired), and Ben Shneiderman, professor of computer science, to add touch screens to the Scorpio and MUMS online catalog interfaces. UMd’s collaborative relationship with the Library continued on into the American Memory project.

“They contracted with us at Maryland to do a series of training events on the user-interface side of American Memory,” said Marchionini. “We did a lot of prototypes. This is some of the early dynamic-query work that Ben Shneiderman and his crew and those of us in the Human Computer Interaction lab were inventing. We worked on several of the sub-collections.”

Marchionini’s expertise is in creating the underlying data architecture and determining how the user will interact with the data; he leaves the interface design — the pretty page — to those with graphic arts talent.

A lot of analysis, thought, research and testing goes into developing appropriate visual cues and prompts to stimulate interactivity with the user. How can people navigate dense quantities of information to quickly find what they’re searching for? What kind of visual shorthand communicates effectively and what doesn’t?

When an interface is well-designed, it doesn’t call attention to itself and the user experience is smooth and seamless. Above all, a well-designed interface always answers the two questions “Where am I?” and “What are my options?”.

Regarding his work on cues and prompts, Marchionini cites another early UMd/Library of Congress online project, the Coolidge-Consumerism collection.

“We wanted to give people ‘look aheads’ and clues about what might happen and what they were getting themselves into if they click on something,” said Marchionini. “The idea was to see if we can show samples of what’s down deep in the collection right up front, either on the search page or on what was in those days the early search-and-results page. It was a lot of fun to work with Catherine Plaisant and UMd students on that. We made some good contributions to interface design.” Marchionini and Plaisant delivered a paper at the Computer-Human Interaction group’s CHI 97 conference titled, “Bringing Treasures to the Surface: Iterative Design for the Library of Congress National Digital Library Program,” which details UMd’s interface design process.

Marchionini has long had an interest in video as a unique means of conveying information. Indeed, he may have recognized video’s potential long before many of his peers did.

In 1994, he and colleagues from the UMd School of Education worked on a project called the Baltimore Learning Community that created a digital library of social studies and science materials for teachers in Baltimore middle schools.

Apple donated about 50 computers. The Discovery Channel offered 100 hours of video, which Marchionini and his colleagues planned to digitize, segment, index and map to the instructional objectives of the state of Maryland. It was an ambitious project and Marchionini said that he learned a lot about interactive video, emerging video formats, video copyrights and the programming challenges for online interactivity.

“We built some pretty neat interfaces,” said Marchionini. “At the time, Java was just coming out and we were developing dynamic query interfaces in the earliest version of Java. We were moving toward web-based applets. And we were building resources for the teachers to save their lesson plans, including comments on how they used the digital assets and wrote comments on them and shared them with other teachers. Basically we were building a Facebook of those days — getting these materials shared with one another and people making comments and adding to other people’s lesson plans so they could re-use them.”

Marchionini adds that the Baltimore Learning Community project is a good example of the need for digital preservation. Today, nothing remains from the project except for some printouts of screen displays of the user interfaces and website, and a few videotapes that show the dynamics.

“Today’s funding agencies’ data-management plan requirements are a step in the right direction of ensuring preservation,” said Marchionini.

In 1998, Marchionini joined the faculty at the School of Information and Library Science at the University of North Carolina, Chapel Hill, where he continued his video research along with his other projects. In 2000, he and Barbara Wildemuth and their students launched Open Video, a repository of rights-free videos that people could download for education and research purposes. Open Video acquired about 500 videos from NASA, which Open Video segmented and indexed. Archivist and filmmaker Rick Prelinger donated many films from his library to Open Video before he allied with the Internet Archive. Open Video even donated hundreds of videos to Google Video before Google acquired YouTube.

In 2000, around the time that NDIIPP was formed, Marchionini started discussing video preservation with his colleague Helen Tibbo and others. He concluded that one of the intriguing aspects of preserving video from online would be to also capture the context in which the video existed.

Marchionini said, “What kind of context would you need, say in 2250, if you see a video of some kids putting Mentos in Coke bottles and squirting stuff up in the air? You would understand the chemistry of it and all that but you would never understand why half a million people watched that stupid video at one time in history.”

“That’s where you need the context of knowing that this was the time when YouTube was happening and people were discovering ways to make their own videos without having to have a million dollar production lab or a few thousand dollars worth of equipment. The importance of it is that the video is associated with what was going on in the world at the time.”

With NDIIPP grant money, by way of the National Science Foundation, Marchionini and his colleagues created a tool called ContextMiner, a sort of tightly focused, specialized web harvester that is driven by queries rather than link following. A user gives ContextMiner a query or URL to direct to YouTube, Flickr, Twitter or other services. In the case of YouTube, ContextMiner then regularly downloads not only the video files returned from the search but also whatever data on the page is associated with that video. A typical YouTube page will have comments, ratings and links to related videos. For a while, ContextMiner even harvested incoming links, which placed the video in a sort of contextual constellation of related topics.

The inherent educational value of video is that it can show a process. You can either read about how to juggle or how to tie your shoe laces, or you can watch a demonstration. Modelling communicates processes more effectively than written descriptions of processes.

Marchionini also sees video as a means of recording a process for research purposes. As an example, he described a situation where he wanted to capture and review the actions of users as they conducted queries and negotiated the search process.

He said, “I wanted to see a movie of a thousand people’s searches going through these states, from query specification to results examination and back to queries. Video is a way to preserve some things that have dynamics and interactions involved, things that you just can’t preserve in words. This is critical for showing processes, such as interaction dynamics, in a rapidly changing web environment. Because old code and old websites may no longer work, video is an important tool to capture those dynamics. That’s the only way I have of going back and saying, ‘Ten years ago, here were these interfaces we were designing and here’s why they worked the way they did.’ And I show a video.”

Today Marchionini is dean of the UNC School of Information and Library Science and he heads its Interaction Design Laboratory. The results of Marchionini’s research over the years have influenced our daily human/computer interaction in ways that we’ll never know.  Interfaces will continue to evolve and get refined but it is important to remember the work of people like Marchionini who did the early research and testing, labored on the prototypes and laid the foundation of effective human-computer interface design, making it possible for modern users to interact effortlessly with their devices.

Professors may not get the glory and attention that their work deserves but that’s not the point of being a teacher. Teachers teach. They pass their knowledge along to their students and often inspire them to create the Next Big Thing.

“University professors create ideas and prototypes and then the people who get paid to build real systems do that last difficult 10% of making something work at scale,” said Marchionini. “We train students. And it’s the students that we inspire, hopefully, who go on to industry or government work or libraries. And they put these ideas into place.

“My job is ideas and directions. Some stick and others do not. I hope they all get preserved so we can learn from both the good ones and the not-so-good ones.”


Categories: Planet DigiPres

BitCurator’s Open Source Approach: An Interview With Cal Lee

The Signal: Digital Preservation - 2 December 2013 - 2:50pm

Cal Lee, Associate Professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill

Open source software is playing an important role in digital stewardship. In an effort to better understand the role open source software is playing, the NDSA infrastructure working group is reaching out to folks working on a range of open source projects. Our goal is to develop a better understanding of their work and how they are thinking about the role of open source software in digital preservation in general.

For background on discussions so far, review our interviews with Bram van der Werf on Open Source Software and Digital Preservation, Peter Van Garderen and  Courtney Mumma on Archivematica and the Open Source Mindset for Digital Preservation Systems and Mark Leggott on Islandora’s Open Source Ecosystem and Digital Preservation. In this interview, we talk with Cal Lee, Associate Professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill about BitCurator.

Trevor: The title of your talk about BitCurator to the NDSA infrastructure working group explained it as “An Open-Source Project for Libraries and Archives that Takes Bitstreams Seriously.” Could you unpack that a bit for us? What does it mean to take bitstreams seriously and why is it important for archives to do so?

Cal: Computers store and process information through physical mechanisms, such as turning transistors on/off and changing/detecting the magnetic properties of the surface of a disk.  However, software is designed to deal with bitstreams, which are abstractions of those physical properties into sequences of 1s and 0s.  As I’ve expressed elsewhere, the bitstream is a powerful abstraction layer, because it allows any two computer components to reliably exchange data, even if the underlying structure of their physical components is quite different. In other words, even though the bits that make up the bitstream must be manifested through physical properties of computer hardware, the bitstreams are not inextricably tied to any specific physical manifestation.  So the bitstream will be treated the same, regardless of whether it came off a hard drive, solid state drive, CD or floppy disk.

The bitstreams can be (and often are) reproduced with complete accuracy.  By using well-established mechanisms – such as generation and comparison of cryptographic hashes (e.g. MD5 or SHA1) – one can verify that two different instances of a bitstream are exactly the same. This is more fundamental than simply saying that one has made a good copy. If the two hash values are identical, then the two instances are, by definition, the same bitstream.
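
As a concrete illustration of that verification step, the short Python sketch below hashes two files in chunks and compares the digests; the file names are hypothetical.

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha1") -> str:
    """Hash a file in chunks so large files never have to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Two instances of what should be the same bitstream (hypothetical file names).
if file_digest("received_copy.doc") == file_digest("repository_copy.doc"):
    print("Identical bitstreams")
else:
    print("The bitstreams differ")
```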

In our everyday use of computers, we luckily don’t need to worry about bitstreams.  We focus on higher-level representations such as documents, pages and programs.  We click on things, copy things and open things, without having to worry about their constituent parts.  But those responsible for the long-term preservation of digital information need to attend to bitstreams.  They need to ensure the integrity of bitstreams over time by generating and then periodically verifying the cryptographic hashes that I mentioned earlier.  They also often need to view files through hex editors, which are programs that allow them to see the underlying bitstreams (presented in 8-bit chunks called bytes), so they can identify file types, extract data from otherwise unreadable files, figure out the underlying contents and structures of files, and even reverse engineer formats in order to bring otherwise obsolete files back to life.
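
The kind of low-level inspection Cal describes can also be approximated outside a hex editor. The sketch below prints a file’s opening bytes and checks a couple of well-known signatures; it is a simplified illustration (real identification tools consult much larger signature databases), and the file name is hypothetical.

```python
def hexdump_head(path: str, length: int = 16) -> None:
    """Show the first bytes of a file in hex, roughly as a hex editor would."""
    with open(path, "rb") as fh:
        head = fh.read(length)
    print(" ".join(f"{b:02x}" for b in head))
    # Two familiar magic numbers, purely for illustration.
    if head.startswith(b"%PDF"):
        print("Looks like a PDF")
    elif head.startswith(b"\x89PNG"):
        print("Looks like a PNG")

hexdump_head("mystery_file.bin")  # hypothetical file name
```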

Bitstreams are also important when it comes to preserving the information acquired on removable media such as hard drives, flash drives, CDs or floppy disks.  Well-established practices in the field of digital forensics involve using a write blocker to ensure that none of the bits on the medium are accidentally changed or overwritten, and then creating a disk image.  A disk image is a perfect copy of the bitstream that is read off the disk through the computer’s input/output equipment.  It essentially allows librarians and archivists to retain all of the contents of a disk without having to rely on the physical medium.  This is important, because the medium will not be readable forever, so the bits need to be “lifted” off and placed in other storage.  It’s also important because there are many forms of data stored on the disk that may not be replicated correctly simply by copying and pasting the files from the disk.  The standard forensics software that creates a disk image also generates a cryptographic hash of the entire disk image (as opposed to the hashes of the individual files), so someone in the future can verify the disk image and ensure that none of the bits have changed.
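
A bare-bones sketch of that imaging idea, assuming a Linux-style device node: copy the raw bitstream to an image file while hashing the whole stream as it goes past. This is only an illustration; forensic imaging tools such as Guymager add write-blocking workflows, bad-sector handling and acquisition metadata that a snippet like this does not.

```python
import hashlib

def image_device(device: str, image_path: str) -> str:
    """Copy a raw device to a disk image, hashing the entire bitstream as it is read."""
    md5 = hashlib.md5()
    with open(device, "rb") as src, open(image_path, "wb") as dst:
        for chunk in iter(lambda: src.read(1 << 20), b""):
            dst.write(chunk)
            md5.update(chunk)
    return md5.hexdigest()

# Hypothetical device node and output file; run behind a hardware write blocker.
print(image_device("/dev/sdb", "floppy_disk.img"))
```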


The process for creating a disk image begins by being able to read the physical media. For example, a 3.5 inch disk like this.

Trevor: Disk images are an important part of that bitstream focus. At its core, BitCurator functions to help create disk images and then enable a user to carry out a range of operations on them. Could you tell us a bit about how your team is thinking about disk images themselves as a format? For example, to what extent is the image the artifact and the process of creating an image a preservation action? Or, conceptually, is the image more akin to a derivative of the artifact?

Cal: As I explained earlier, a bitstream is the same bitstream regardless of how it’s physically stored.  So if you navigate to a file that’s stored on your computer and send it to me as an email attachment, and I then save it to my computer, my copy of the bitstream will be exactly the same as your copy.  The associated metadata, such as the file name and timestamps could be completely different, but the file as a bitstream will not change (assuming there has been no corruption of the file along the way).  We can verify this by generating hashes on the two copies and seeing that they match.

This same set of relationships applies to disk images.  If you create a disk image of a floppy disk and send it to me, I’ll then have the exact same bitstream that you have.  If you create another disk image of that disk, it should also be exactly the same (again, assuming no data loss due to hardware failure).  It is this disk image that we need to treat as the “original” in a digital environment.  This is true for two fundamental reasons.  First, software on your computer doesn’t have access to the underlying physical properties of a disk the same way that a reader has direct access to the physical properties of a printed page.  The bitstreams that computers read, manage and process are always mediated through the computers’ input/output equipment.  So, except in extremely rare cases of heroic recovery, there’s no practical value in treating the contents of a disk as anything other than the stream of bits that can be read through the I/O equipment.  In other words, for practical purposes, the disk image is the disk.

The second reason to treat the disk image as the original is that the physical disk will not be readable forever.  The industry will abandon support for the hardware and low-level software/firmware required to read it.  The performance of the medium (its storage capacity and input/output transfer rate) will become less acceptable over time – ever try to store a terabyte of data on floppy disks?  And the bits will eventually be lost through natural physical aging.

This doesn’t mean that the artifactual properties of hardware are never important.  Understanding the original hardware can be important to knowing what the user experience was like at the time.  And taking pictures of original media in order to reflect things written on them can be a good way to reflect aspects of the creator’s intentions and work habits.


Here you see the interface for Guymager, the tool BitCurator uses to create disk images.

Trevor: How is the BitCurator team approaching interoperability between this tool and other digital preservation tools?

Cal: Probably the most important answer to your question is that all of the BitCurator software is distributed under an open-source license.  This means that people can download, manipulate and redistribute whatever parts they find useful.

We’re also in regular contact and collaborate with people involved in various other development activities.  For example, Courtney Mumma from Artefactual Systems is on the BitCurator Development Advisory Group, and we work closely with Artefactual to ensure that the BitCurator software and its data output are structured and packaged in ways that can be incorporated into Archivematica.  Mark Matienzo is also on the DAG, and we’ve had many discussions with him about how the BitCurator software can play well with ArchivesSpace.  Similarly, we strive to stay abreast of related software development activities being carried out within collecting institutions, such as the valuable work of Peter Chan at Stanford, Don Mennerich at the New York Public Library, Mark Matienzo at Yale, and activities outside the US that are represented well by the documentation that Paul Wheatley has developed for the Open Planets Foundation.

Kam Woods, who is the BitCurator Technical Lead, carries out extremely important liaison activities between our team and not just developers in the cultural heritage sector but also developers of standards and software in the forensics industry.  This is particularly important for BitCurator, because we’re repurposing, adapting and repackaging many existing open-source digital forensics tools.  Identifying and managing software dependencies is an ongoing process.


Viewing reports on a disk image in Bulk Extractor

Trevor: Could you tell us a bit about the design principles at work in the BitCurator project? That is, instead of trying to build things from scratch you seem to be bringing together a lot of open source software created for somewhat different use cases and make it useful to archives. Why did your team develop this approach and what do you see as its benefits and limitations?

Cal: Almost twenty years ago, in a book called Darwin’s Dangerous Idea, Daniel Dennett argued that complex systems evolve through what he called the “accumulation of design.”  New products, services and theories and various other human products build off of existing ones.  Software development is no different.  Programmers know that it’s usually better to make use of existing code than to build it from scratch.  Why write the code required to write text to the screen, for example, if someone else has already done that?  Open-source software facilitates this process, because reusing someone else’s code doesn’t require the negotiation of permissions or payment.

Code adaptation and reuse is a particularly powerful proposition for the application of digital forensics to digital collections, because there is a great deal of powerful software that has already been developed, and it’s unlikely that collecting institutions would ever have sufficient resources to develop such tools completely on their own.  As someone who has been working with digital archives for many years, I’ve been amazed by how many tools being developed for digital forensics can be applied to the problems we face.  A great place to see leading-edge development in this space is the Digital Forensics Research Workshop, which is an annual conference that publishes its papers in a journal called Digital Investigation.  I’ve been particularly grateful for the open-source (or public domain) software developed by Simson Garfinkel at the Naval Postgraduate School and Brian Carrier of Basis Technologies.

Of course, all design decisions involve costs and benefits.  The main challenges of using software developed by others are that your specific use case may not have been the primary priority of those developers, and as I mentioned earlier, you have to stay on top of dependencies with that existing software as their (and your) software evolves over time.  The BitCurator team and I believe strongly that these costs are well worth the numerous benefits.  And we’re working to support the kinds of use cases that are most important to collecting institutions.


Visualizations of some of the file system metadata created though bulk extractor’s reporting functions.

Trevor: Could you tell us a bit about how you are thinking about the sustainability of BitCurator? For example, are you thinking about building a community of users and developers? What kinds of future funding streams are you looking to?

Cal: There are various elements of BitCurator that are designed to build capacity and ensure the sustainability of our activities. I’ve already explained that the software is distributed under an open source license, so diverse constituencies will be able to extend our tools at will.  Members of the BitCurator team have been offering a lot of continuing professional education opportunities (including a module for Rare Book School and classes for the Digital Archives Specialist program of the Society of American Archivists), which help to build and cultivate a community of users.  There’s a BitCurator user group that interested professionals can join, and our project wiki includes an increasing body of documentation to help people to install and use the software.

A significant focus of the second phase (October 2013 to October 2014) of BitCurator is to devise and implement a sustainability plan.  This is being overseen and coordinated by Porter Olsen, who is the Community Lead for BitCurator.  We’re currently exploring a variety of membership models.  We should have a much more detailed answer to your question in the coming year.

Trevor: Could you tell us a bit about how you are trying to engage and build a community around the software? What kinds of approaches are you taking and to what ends are you taking those approaches?

Cal: I’ve already talked about most of them within the context of sustainability.  The two issues (sustainability and community building) are closely related.  The products of the BitCurator project will ultimately be sustainable if there are professionals working in a variety of institutions who value them, use them, and contribute back to their ongoing development through evaluative feedback, bug reports and code revisions/enhancements.  In addition to our educational offerings and guidance resources, we’ve also published many papers/articles about this work and given talks at a variety of conferences and other professional events.

Porter Olsen is taking on many new engagement activities this year.  Among other things, this includes site visits and webinars.  The first two webinars that Porter is offering have filled up within a few days of announcing them, so there seems to be a lot of interest.

Trevor: It strikes me that one of the biggest opportunities and challenges here is that there is a significant literacy gap within the community around how to deal with born-digital archival materials. For example, if you were making a tool to turn out finding aids, there would be relatively solid requirements within the archives community of practice. In contrast, in working with born-digital archival materials there is still an extensive need for developing those practices and a significant lack of knowledge about the issues at hand among many in the archives profession. First off, do you agree with this perspective? Second, if so, how are you approaching designing a tool while the archives community is still simultaneously bootstrapping its way into working with these materials?

Cal: I agree with you that the landscape is currently undergoing dramatic evolution.  This is what makes the work so fun and so fulfilling.  Professionals in a diverse range of collecting institutions are developing workflows that involve digital forensics tools and methods.  They’re learning from each other and making changes as they go along.

This is also a very exciting situation for an educator. I don’t know if they always believe me when I tell them this, but today’s students in a program like the one at UNC SILS will be defining and establishing archival practices of the future. If you want to continuously take on new challenges and creatively develop entirely new ways of working, then this is a great profession to join right now. If you want a profession that’s safe and predictable, I recommend looking elsewhere.

Trevor: How has your work on BitCurator shaped your general perspective on the role that open source software can and should play in digital preservation? I would be particularly interested in any comments and connections you have to some of the interviews we have already done in this series. For reference, those include Bram van der Werf on Open Source Software and Digital Preservation, Peter Van Garderen &  Courtney Mumma on Archivematica and the Open Source Mindset for Digital Preservation Systems and Mark Leggott on Islandora’s Open Source Ecosystem and Digital Preservation.

Cal: It’s hard for me to argue with much that Bram, Peter, Courtney or Mark have said to you.  I think we are of a like mind on many things.  The curation of digital collections is a collective endeavor, and it can benefit greatly from open-source software development.  But it’s definitely not a panacea.  We have to learn from each other, assist each other, and celebrate each other’s victories.

Categories: Planet DigiPres

SPRUCE project Award: Lovebytes Media Archive Project

Open Planets Foundation Blogs - 28 November 2013 - 6:32pm

Lovebytes currently holds an archive of digital media assets representing 19 years of the organisation’s activities in the field of digital art and a rich historical record of emerging digital culture at the turn of the century. It contains original artworks in a wide variety of formats, video and audio documentation of events alongside websites and print objects.

In June 2013 we were delighted to receive an award from SPRUCE, which enabled us to devise and test a digital preservation plan for the archive through auditing, migrating and stabilising a representative sample of material, concentrating on migrating digital video and Macromedia Director files.

Alongside this we developed a Business Case, which makes the case for preserving the archive and describes the work that needs to be done to make it accessible for the benefit of current and future generations, with a view to this forming the basis of applications for funding to continue this work.

Context

Lovebytes was set up to explore the cultural and creative impact of digitalisation across the whole gamut of artistic and creative practice through a festival of exhibitions, talks, workshops, performances, film screenings and commissions of new artwork.

We wanted the festival to be a forum to pose open questions about the impact of digitalisation for artists and audiences, in an attempt to find commonalities in working practice and new themes, highlight new and emerging forms and trends in creative digital practice, and also provide support for artists to disseminate and distribute their own work through commissions.

This was a groundbreaking model for a UK media festival and established Lovebytes as a key player amongst a new wave of international arts festivals.

The intention in developing a plan for the Lovebytes Media Archive is to look at how best to capture the 'shape' of the festival and how best to represent this in creating an accessible version of the archive.

Main Objectives

The Objectives of the project funded through SPRUCE are outlined below:

  1. Develop a workflow for the migration of the digital files and interactive content, progressing on from work done during SPRUCE Mashup London.
  2. Tackle issues around dealing with obsolete formats and authoring platforms used by artists (such as Macromedia Director Projector files) and look at ways of making this content more accessible whilst also maintaining original copies for authenticity.
  3. Research and develop systems for transcription, data extraction and the use of metadata to increase accessibility of the archive.
  4. Report on progress and share our findings for the benefit of the digital preservation community.
  5. Develop a digital preservation Business case, with a view to approaching funders.

Approach

We started by developing a research plan for a representational sample of the archive (see below), focusing on one festival, rather than a range of samples from over the 19 years. We selected the year 2000 as this included a limited edition CD Rom / Audio CD publication which contains specially commissioned interactive and generative artwork in a variety of formats.

Additional assets in the representation sample include video documentation of panel sessions, printed publicity, photographs, press cuttings and audience interviews in a wide variety of formats.

Research plan for the representational sample

  • Auditing the archive.
  • Choosing a representative sample.

 Stabilising and migrating

  • Reviewing content to assess problems and risk
  • Stabilise again with a view to rectifying problems
  • Cataloguing and naming.
  • Planning for future accessibility and interpretation.
  • Extracting metadata.
  • Prototyping a search interface to provide access to the archive (with Mark Osbourne from Nooode).

Data integrity is paramount in digital preservation and requires utmost scrutiny when dealing with 'born digital' artworks, where every aspect of the artist's original intentions should be considered a matter for preservation and any re-presentation of a digital artwork can be regarded as a reinterpretation of the work.

In all cases, the most urgent work was the migration of data to stabilise and secure it. Amongst the wide range of formats we hold, CDs and CD-ROMs are prone to bit rot, while magnetic formats can degrade gradually, be damaged by electrical and environmental conditions, or be easily damaged during attempts to read or play them back.

The majority of our preservation work was to migrate from a wide variety of formats to hard drive, essentially consolidating our collection into one storage medium, which is then duplicated as part of a backup routine.

Our research focused on the following six areas:

  1. Macromedia Director Projector files
    • Migrating obsolete files and addressing compatibility issues.
  2. DV Tapes
    • Migrating DV tapes and transcribing panel sessions with a view to researching how transcriptions could be used for text based searches of video content, and how this can be embedded as subtitles using YouTube.
  3. Restoring Lovebytes website
  4. Developing naming systems for assets
  5. Prototyping a searchable web interface and exploring the potential for using ready-made, free and accessible tools for transcription dissemination.
  6. Writing a Business Case for Lovebytes Media Archive

We learned some valuable lessons along the way that we'd like to share with like-minded organisations, especially those who have limited resources and are looking to preserve their own digital legacy on a tight budget.

Our findings have been compiled into a detailed report, providing a workflow model which makes recommendations for capturing, cataloguing and preserving material. It outlines our research into preserving artwork on obsolete formats and authoring platforms, as well as systems for transcription, data extraction and the use of metadata to increase accessibility of the archive.

We wanted to begin looking at the preservation issues for our collection and devise our own systems and best practice; therefore the recommendations reached for preserving digital assets in various media formats reflect the organisational needs of Lovebytes and might not align with another organisation's goals.

Business Case

We used the Digital Preservation Business Case Toolkit to help us get started on our Business Case. This was a fantastic resource and helped us shape our Case and consider all the information and options we needed to include.

The Business Case will form the foundation for applications for public and private funding and will be tailored to meet specific requirements. Through writing this, we were able to identify the potential risks to the archive, its value and how we might restage artworks or commission artists to use data from it within the preservation process.

Conclusions

As non-experts in digital preservation we knew we were about to encounter some steep climbs and were initially apprehensive about what lay ahead, given that most of our material had sat in a garage for ten years. Our collection, until then, had remained largely un-catalogued and, aside from being physically sealed in oversized Tupperware, the digital assets had been neglected. Many items were the only copy, stored in one location in danger of decay, damage or loss. As a small arts organisation recently hit by cuts to arts funding, Lovebytes and its archives were in a precarious position: unsupported and vulnerable.

The SPRUCE Award gave us the opportunity to take a step back and re-evaluate these assets, making us aware of their value and the need to save them and to start the preservation process. It has given us the opportunity to explore solutions and devise our own systems for best practice within the limited resources and funding options available to us.

It has allowed us to crystallize our thoughts around using the Lovebytes Media Archive to investigate digital archivism as a creative process and specifically how digital preservation techniques may be used to capture and preserve the curatorial shape and context of arts festivals.

By using available resources and bringing in external expertise where necessary, we found this process rewarding both in terms of developing new skills and in reaffirming our past, current and future curatorial practice.

Having undertaken this research we now feel positive about the future of the archive. We have a clear strategy for preservation and a case to take to funders and partners to secure it as an exemplar born-digital archive project which attempts to capture, preserve and represent the history of Lovebytes as a valuable record of early international digital arts practice at the turn of the century.

Jon Harrison and Janet Jennings of Lovebytes, and Mark Osbourne of Nooode

Preservation Topics: SPRUCE
Categories: Planet DigiPres

10 Tips To Preserve Your Holiday Digital Memories

The Signal: Digital Preservation - 27 November 2013 - 3:18pm

During Thanksgiving and the rest of the holiday season, you might take photos and video of friends and loved ones. You might make audio recordings of voices, conversations and music. Whatever you photograph or record, we hope you will take time to backup and preserve your digital stuff.


Thanksgiving on Flickr by martha chapas95

  1. As soon as you can, transfer the digital files off the camera, cell phone or other device and onto backup storage. That storage could be your computer, a thumb drive, a CD, a hard drive or an online cloud service. You should also backup a second copy somewhere else, preferably on a different type of storage device than the first.
  2. If you have time, browse your files and decide if you want to keep everything or just cull the best ones. Twenty photos of the same scene might be unnecessary, no matter how beautiful the scene might be. And despite who is in that video, if the video is blurry and dark and shaky, you probably will never watch it again.
  3. When you back your files up, organize them so you can easily find them. You can rename files without affecting the contents. And renaming a file will help you find it quickly when you search for it later.
  4. Organize file folders however you want but be consistent with your system. Label folders by date, description or file type (such as “Photos” or “Thanksgiving 2013”). Organization makes it easy to find your stuff later.
  5. You can add descriptions to your digital photos, much as you would write a description on a paper photo. We’ve gone into depth in a few blog posts to describe how it works.
  6. Similarly, if you make any digital audio recordings, you can add descriptive information into the audio files themselves, information that will display in the MP3 player.
  7. If you have a special correspondence with someone, you can archive the emails and cell phone texts much as you would a paper letter or card.
  8. Remember that all storage devices eventually become obsolete; maybe you can recall devices and disks from just a decade ago that are now either obsolete or on their way out of fashion. If you have valuable files still on those obsolete media, those files become increasingly difficult to access with every passing year.
  9. So in order to keep your files accessible, you should move your collection to a new storage medium about every five to seven years. That is about the average time for something new and different to come out. At the least, if you use the same backup device frequently — like a favorite thumb drive — get a new one.  Migrate your collection to new media periodically.
  10. Write down where you have important files, along with any passwords needed to access them, and keep that information in a secure place that a designated person can access if you aren’t around. Allow your memories to live on!

Treat your digital files responsibly, preserve those memorable moments and you can enjoy them again and again for years.

For more information on personal digital archiving, visit digitalpreservation.gov/personalarchiving/.

Categories: Planet DigiPres

Personal Digital Archiving 2014: Building Stronger Personal Digital Archiving Communities

The Signal: Digital Preservation - 25 November 2013 - 7:42pm

2.7 meg file, by flickr user s2art

There is a growing community of individuals who are interested in the preservation of personal digital information.  Those individuals may include professionals working in libraries and archives who are receiving personal collections, scholars working with their own research materials and data, commercial companies working on consumer products to help people organize and save their digital content, and other people who create multitudes of personal digital content for various reasons.   They come together annually to share practical solutions to preserving and archiving all types of personal digital content.

Personal Digital Archiving 2014 will be held at the Indiana State Library in Indianapolis, Indiana, April 10-11, 2014.  This is the first time the conference will be held in the Midwest.  It was previously held in San Francisco, California (2010-2012) and in College Park, Maryland (2013).

The Personal Digital Archiving conference explores the intersections between individuals, public institutions, and private companies engaged in the creation, preservation, and ongoing use of the digital records of our daily lives. The conference reflects upon the current status of personal archiving, its achievements, challenges, issues, and needs as evidenced through research, education, case studies, practitioner experiences, best practices, the development of tools and services, storage options, curation, and economic sustainability. There is also interest in the role of libraries, archives and other cultural heritage organizations in supporting personal digital archiving through outreach or in conjunction with developing community history collections.

Some of the issues the conference committee is looking for the community to explore together are:

  • How do we preserve the ability to access digital content over time when every app/community/network has a lifecycle that involves the end of its existence?
  • How should libraries, museums and archives collect personal digital materials? How do we better share our knowledge and communicate about our work (including the failures as well as the successes)?
  • How are archivists, curators, genealogists using born-digital and/or digitized material in their research?
  • How can individuals be encouraged to undertake personal digital archiving activities?
  • What are effective strategies and best practices for personal digital archiving in social media and ecommerce settings?
  • What tools and services now exist to help with personal archiving? What do we need to make the process easier or more effective?

If you’re working with personal digital archives, please consider sharing your work at PDA2014.  The call for proposals is open and the submission deadline is December 2.

For those interested in attending, registration will open early in the new year on February 1, 2014.

PDA2014 is sponsored by the Indiana State Library and NDIIPP, in collaboration with the Coalition for Networked Information.

Categories: Planet DigiPres

From Analog to Digital: A Changing Picture of the Kennedy Assassination

The Signal: Digital Preservation - 22 November 2013 - 9:35pm

The first images I recall of the Kennedy Assassination are grainy black and white television broadcasts. I was in the fourth grade 50 years ago today, and after an anguished announcement on the public address system, we were sent home.

The TV was on in the living room with solemn reports. What followed over the next few days was a stunning flow of amazing events, all rendered in a few hundred flat lines of grey tones. I remember a strange mix of feelings, awash in horrible facts relayed by reassuringly familiar news correspondents. Those sober faces, rendered the same way as the thousands of hours of TV I had already consumed, helped me accept what had happened. Maybe it was my youth, but even the repeated rebroadcast of disturbing video clips–Jack Ruby’s shooting of Oswald in particular–eventually became an acceptable, if terribly sad, part of reality.


Aftermath of the shooting in Dallas, Cecil Stoughton. White House Photographs. John F. Kennedy Presidential Library and Museum, Boston

The Zapruder film upended that complacency. I first saw frames from the film in Life magazine shortly after the shooting, but their impact was minimal. They were static and in black and white. The full color version of the film was kept from public view for many years due to intellectual property restrictions, and it wasn’t until 1975 that it had a widespread public viewing. But even then most people saw the film on distinctly non-HD television, and perhaps not in color.

I didn’t see the film clearly until 1991 when it was used as part of the movie JFK. The lurid Kodachrome colors, the oddly intimate home movie jerkiness, the abrupt transition from banal to horrific–the film was a waking terror dream, something that couldn’t be happening actually was happening.

The nightmare quality was further enhanced by radical differences the film had from the original TV coverage: overly saturated colors in contrast with drab black and white; eerie silence in contrast with the soothing voices of newscasters; powerful, gut-churning visual reality in contrast with calm narrative descriptions.

With the internet came another change in my visual impression of the assassination. Beforehand, the Zapruder imagery was not in plain sight. But with digital versions proliferating on the web, the film was suddenly much more available in all kinds of different ways. It regularly showed up as images or clips in news stories and in essays; it was dissected in academic papers (such as A 3-D Lighting and Shadow Analysis of the JFK Zapruder Film (Frame 317) (PDF)).  It became a staple of sites dedicated to video content, and anyone with an internet connection can view titles such as The Undamaged Zapruder Film, Zapruder Film Slow Motion (HIGHER QUALITY) or The Inky Face Trajectory In The Zapruder Film.

All this has altered my visual model of the assassination. I’ve moved from a purely rational, analog-based acceptance of what I originally saw on TV to a digitally-driven sense that the event lives in some strange, uncomfortable zone that resists clear-cut recognition or acknowledgement. While I have never seen compelling evidence of a conspiracy, I can easily see why people are drawn to the idea. Those 26.6 Zapruder seconds have a strange hallucinatory impact that seemingly builds each time you watch. It’s natural to try and explain what looks like a delusion, especially one that streams over and over again to your own computer screen.

 

Categories: Planet DigiPres

On the Road with FADGI: Recent Conference Presentations Highlight Current Audio and Video Projects

The Signal: Digital Preservation - 21 November 2013 - 4:40pm

One of the best things about the Federal Agencies Digitization Guidelines Initiative is that we are a community-oriented group. We work together to bring about solutions to real-world problems. Our efforts are focused on defining common guidelines, methods and practices for federal agencies digitizing historical content, and the impact of our projects and products often extends beyond the government sector into the wider audio and moving image preservation communities.

This fall, two of our FADGI Audio-Visual Working Group members hit the conference circuit to discuss some of our current efforts, and we couldn’t be more pleased by the positive responses.

The Interstitial Error is visible in the top row; the two rows would have the exact same waveform shape if there was no error. To hear an Interstitial Error, check out the AV Artifact Atlas (//preservation.bavc.org/artifactatlas/index.php/Interstitial_Error).
Photo courtesy of AudioVisual Preservation Solutions from the FADGI Interstitial Error Study Volume I. The Study Report

In late October, FADGI’s work in audio preservation was highlighted at the Audio Engineering Society’s 135th International Convention in New York City. One of our expert consultants, Chris Lacinak of AudioVisual Preservation Solutions, included FADGI projects in his tutorial about audio performance systems testing. Part of the workshop covered the problem of Interstitial Errors (PDF), a term Chris coined to describe momentary artifacts, caused by a failure in a digital audio workstation’s writing of data to a storage medium, that result in both lost content and a disruption of file integrity.
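
Because a dropped-sample gap shifts every sample that follows it, comparing a reference signal against the capture makes even a tiny dropout show up as a clear divergence point. The sketch below is only a toy illustration of that idea, not the FADGI study methodology; the test tone, sample rate, gap position and tolerance are invented for the example.

```python
"""Toy illustration of locating an interstitial-error-style dropout.

This is NOT the FADGI test methodology; it only sketches the idea of
comparing a reference signal against a capture that silently lost a few
samples during writing, which shifts everything after the gap.
"""
import numpy as np

def first_divergence(reference, capture, tolerance=1e-6):
    """Return the first sample index where the two signals stop matching."""
    n = min(len(reference), len(capture))
    mismatch = np.abs(reference[:n] - capture[:n]) > tolerance
    return int(np.argmax(mismatch)) if mismatch.any() else None

# Build a 1 kHz test tone at 48 kHz, then simulate a write failure by
# silently dropping 32 samples partway through the "captured" copy.
sample_rate = 48_000
t = np.arange(sample_rate) / sample_rate
reference = np.sin(2 * np.pi * 1000 * t)
capture = np.delete(reference, slice(20_000, 20_032))  # the interstitial gap

index = first_divergence(reference, capture)
print(f"Signals diverge at sample {index} "
      f"({index / sample_rate * 1000:.2f} ms into the capture)")
```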

The workshop also illuminated the topic of analog-to-digital converter performance testing, highlighting the FADGI 2012 guideline on ADC metrics and testing, a document that built upon two foundational publications – the 2009 Guidelines on the Production and Preservation of Digital Audio Objects (TC04) from the International Association of Sound and Audiovisual Archives and the Audio Engineering Society’s AES-17: AES standard method for digital audio engineering — Measurement of digital audio equipment.
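
To give a rough sense of the kind of measurement such guidelines formalize, the sketch below estimates the signal-to-noise ratio of a simulated capture of a 997 Hz test tone using a windowed FFT. It is a generic, textbook-style calculation rather than the procedure specified by FADGI, IASA TC04 or AES-17, and the tone frequency, analysis bandwidth and noise level are arbitrary choices for the example.

```python
"""Rough sketch of one ADC performance metric: signal-to-noise ratio
estimated from a captured 997 Hz test tone.  This is a generic
illustration, not the FADGI or AES-17 procedure; real test setups
control levels, windowing, notch filtering and weighting far more
carefully."""
import numpy as np

def tone_snr_db(samples, sample_rate, tone_hz=997.0, bandwidth_hz=50.0):
    """Estimate SNR by separating the test-tone bins from everything else."""
    window = np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(samples * window)) ** 2
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    tone_bins = np.abs(freqs - tone_hz) <= bandwidth_hz
    signal_power = spectrum[tone_bins].sum()
    noise_power = spectrum[~tone_bins].sum()
    return 10.0 * np.log10(signal_power / noise_power)

# Simulate a capture: a 997 Hz tone plus a small amount of white noise.
sample_rate = 96_000
t = np.arange(sample_rate) / sample_rate
capture = np.sin(2 * np.pi * 997 * t) + 1e-4 * np.random.randn(len(t))
print(f"Estimated SNR: {tone_snr_db(capture, sample_rate):.1f} dB")
```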

The FADGI 2012 guideline (PDF) will also serve as the starting point for a formal standards project by the AES Working Group on Digital Audio Measurement Techniques (SC-02-01), a project that will address the development of both test methods and performance criteria for the ADCs used in audio preservation systems. The prospect of an official standards project focused on the topic of Interstitial Errors is currently under discussion within this same working group.

Courtney Egan presenting the reformatted video matrix at the AMIA Poster Session.
Photo by Kate Murray

In early November, FADGI work was again on display at the Association of Moving Image Archivists Annual Conference in Richmond, Virginia. Courtney Egan from the National Archives and Records Administration’s Audio-Video Preservation Lab participated in a poster session about the eagerly anticipated and very-soon-to-be-released-for-public-comment matrix, which compares target wrappers and encodings against a fixed set of criteria that come into play when reformatting analog videotapes.

As mentioned in a previous blog post, the evaluation attributes in the matrix include format sustainability, system implementation, cost, and settings and capabilities. Some features specific to video are also evaluated, such as the ability to store multiple or discontinuous time codes and the ability to support different color spaces and bit depths. The Working Group hopes that the matrix will be a helpful tool for those faced with the challenging choice of which target format to use when migrating their legacy videotapes.
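
Purely as a toy illustration of the matrix’s shape, the snippet below scores two invented wrapper-and-encoding combinations against the four attribute categories named above. The candidate names and scores are hypothetical and do not reflect the actual FADGI matrix or its findings.

```python
"""Toy illustration only: a tiny, hypothetical stand-in for the kind of
wrapper/encoding comparison matrix described above.  The candidate
formats, criteria and scores are invented for illustration and do not
reflect the actual FADGI matrix or its conclusions."""
criteria = ["format sustainability", "system implementation", "cost",
            "settings and capabilities"]

candidates = {
    # hypothetical target formats with hypothetical 1-5 scores per criterion
    "Wrapper A / Encoding X": [4, 3, 2, 5],
    "Wrapper B / Encoding Y": [3, 4, 4, 3],
}

for name, scores in candidates.items():
    summary = ", ".join(f"{c}: {s}" for c, s in zip(criteria, scores))
    print(f"{name} -> {summary} (total {sum(scores)})")
```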

So what does all this mean for the future of the FADGI Audio-Visual Working Group? Both presentations were extremely well received. Chris’ tutorial made front page news in the AES Show Daily newspaper and Courtney’s poster session was mobbed. We’re proud, of course, that our efforts are helpful for our federal agency constituents. But we are thrilled that our work is appreciated and embraced by the audio and moving image preservation communities at large. Our collaborative approach to solving shared problems through community-based solutions is working – for everyone – and we wouldn’t have it any other way.

Categories: Planet DigiPres

Residency Program: From the Classroom to the Workplace

The Signal: Digital Preservation - 20 November 2013 - 3:07pm

The following is a guest post by Lyssette Vazquez-Rodriguez, Program Support Assistant, and Valeria Pina, Communication Assistant, both with the Office of Strategic Initiatives at the Library of Congress.

Residents in the inaugural class of the National Digital Stewardship Residency program have been busy at their host institutions since mid-September. The residents agree that during their first weeks of work they did what they know best: research.

This year’s class of Residents. (Photo credit: Molly Schwartz)

Jaime McCurry, resident at the Folger Shakespeare Library, explained, “Right now my work is very research-oriented. Over the course of the residency, I am preparing an annotated bibliography on various resources related to Web Archiving. I’m looking to provide an overview of the current landscape and also to find interesting sources pertaining to Web Archiving in the humanities, specifically. I’ve also performed Quality Assurance tasks on the Folger’s current Web Archive collections and I am in the process of discussing new collections to be added with our Collection Development team.”

Molly Schwartz (Photo credit: Molly Schwartz)

Erica Titkemeyer, resident at the Smithsonian Institution Archives, who is working with time-based media and art, explains that “a typical day at my office tends to be low-key, since I work alone researching at my own workstation. As of now I have carried out a significant amount of research related to the current state of time-based media art (works of art which depend on technology and have duration as a dimension) to conservators within museum settings.”

In addition to research, some of the residents have had the opportunity to attend conferences and network with scholars from the field of digital preservation. Molly Schwartz, who is a resident at the Association of Research Libraries, attended a lecture by Dr. Jonathan Lazar, Professor of Computer and Information Sciences at Towson University.

Margo Padilla (Photo credit: Molly Schwartz)

Margo Padilla, a resident at the University of Maryland, said, “I recently conducted several interviews with electronic literature scholars on their expectations for access to born-digital literary collections. These interviews will help inform the development of the access models I will produce by the end of the residency.”

This is only the beginning of the residency; the residents are thrilled with what they have been doing so far and are eager to continue learning and helping their host institutions meet their objectives.

Categories: Planet DigiPres

The OPF Appoints New Executive Director

Open Planets Foundation Blogs - 20 November 2013 - 8:55am
The Board of the OPF has appointed Ed Fay as the new Executive Director. Ed will join the OPF in February 2014 and will lead the organisation in its efforts to address its members' digital preservation challenges with a practical, community-led approach.

Ross King, Chair of the Board, said: "The Board was extremely gratified to receive qualified applications from Europe, the Middle East, India, and the United States. Four top candidates were selected by all board members and were interviewed personally by a board sub-committee. After evaluating these candidates, support for Ed Fay within the committee and the OPF board was unanimous. Ed has demonstrated his understanding of the different challenges facing both libraries and archives and has a refreshing take on digital preservation from an institutional perspective. We look forward to working with him to enhance the visibility and reputation of the OPF and to create more value for its members."

Ed commented on his appointment: "I’m thrilled to join the OPF and contribute towards the development of digital preservation practice at an important time for libraries, archives, and memory institutions everywhere. The OPF’s mission is to enable collaboration and shared solutions, and I look forward to working with members and the wider community to build capacity for the digital collections of the future."

Before being appointed by the OPF, Ed was the Digital Library Manager at the London School of Economics (LSE) for five years, where he successfully managed the development of LSE’s digital library from its inception to implementation. He also led digital preservation activities at LSE and the school's participation in a number of related projects and working groups. Prior to this he worked on several mass digitisation projects funded by JISC.

Ed will take over the role from Bram van der Werf, who has managed and grown the OPF from its foundation in 2010 into a sustainable membership organisation.

 

Preservation Topics: Open Planets Foundation
Categories: Planet DigiPres