
SCAPE & OPF Hackathon: Hadoop-driven digital preservation

The SCAPE Project and OPF are running a hackathon for developers and practitioners, focussing on Hadoop, an open source software framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. Hadoop is designed to scale out from single servers to thousands of machines.
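
To give a flavour of those simple programming models, here is a minimal sketch of the classic word-count job written against Hadoop's Java MapReduce API. This is purely illustrative and not part of the event material; class names and paths are our own:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // The mapper emits (word, 1) for every token in its input split.
      public static class TokenMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer tokens = new StringTokenizer(value.toString());
          while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // The reducer sums the counts for each word; Hadoop handles the
      // shuffle and sort between the two phases.
      public static class SumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The same map-and-reduce pattern, with word counting swapped for a digital preservation tool, underlies the scenarios below.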

Jimmy Lin from the University of Maryland will be our guest speaker at the event. Jimmy has been working with Big Data and Hadoop for many years, with a focus on natural language processing and information retrieval. He spent an extended sabbatical at Twitter from 2010 to 2012 working on large-scale analytics, an experience he shares in his 2013 Hadoop Summit Europe talk 'Big Data Mining Infrastructure: The Twitter Experience' (http://www.youtube.com/watch?v=T5ZjSFnOxys). He has written a book on MapReduce (http://lintool.github.io/MapReduceAlgorithms/) and is currently working on a scalable rendering engine for web archives based on HBase.

Scenarios
We will be working with two digital preservation scenarios:

  • Web Archiving: File Format Identification/Characterisation (see the sketch following this list)
  • Digital Books: Quality Assurance, text mining (OCR Quality)
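
As a rough illustration of how the first scenario might be approached, the sketch below assumes a Hadoop job whose input is a plain-text manifest listing one HDFS file path per line, with Apache Tika on the job's classpath. The mapper detects each file's MIME type and emits it, so that a summing reducer like the one in the word-count sketch above produces a format profile of the collection. This is one possible approach, not a prescribed solution:

    import java.io.BufferedInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.tika.Tika;

    // Mapper for a format-identification job: each input record is an
    // HDFS path; the mapper sniffs the file with Tika and emits
    // (MIME type, 1) for a summing reducer to aggregate.
    public class IdentifyMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

      private static final IntWritable ONE = new IntWritable(1);
      private final Tika tika = new Tika();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        Path file = new Path(value.toString().trim());
        FileSystem fs = file.getFileSystem(context.getConfiguration());
        // BufferedInputStream gives Tika the mark/reset support it
        // needs to peek at the leading bytes of the stream.
        try (InputStream in = new BufferedInputStream(fs.open(file))) {
          String mimeType = tika.detect(in, file.getName());
          context.write(new Text(mimeType), ONE);
        }
      }
    }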

Alternatively, if you have something else you would like to work on using Hadoop, just let us know, we are keen to hear your ideas.

Competition
Practitioners and developers will work together in groups to address digital preservation challenges using Hadoop. Practitioners will take the role of issue champion, articulating their requirements to the developers and documenting them on the wiki. Developers will brainstorm ideas and work on solutions to the issues. There will be regular check-in points to gather feedback and refine requirements, and there will be prizes for the best issue champion and the best development solution.

All participants will gain practical experience of using digital preservation tools in characterisation and quality assurance processes. We will provide step-by-step worksheets for those who are less familiar with using the command line, and our experts will be on hand to help you through them.

There will be plenty of opportunities for discussion. We have a session for sharing experiences of implementing Hadoop at your organisation, reports from research projects, and a breakout space for lightning talks. We welcome suggestions for talks or discussions you would like to hear.

Agenda

The draft agenda can be seen at: http://wiki.opf-labs.org/display/SP/Agenda+-+Hadoop+Driven+Digital+Preservation

Who should attend?

Practitioners (digital librarians and archivists, digital curators, repository managers, or anyone responsible for managing digital collections): you will learn how Hadoop might fit your organisation and how to write requirements to guide development, and you will gain some hands-on experience of using the tools yourself and finding out how they work. To get the most out of this training course you will ideally have some knowledge or experience of digital preservation.

Developers of all experience levels can participate, from writing your first Hadoop jobs to working on scalable solutions to the issues identified in the scenarios.

Registration

Please register here: https://hadoop-driven-digital-preservation.eventbrite.co.uk.

OPF members are invited to attend free of charge. Please use the code issued by email to waive the fee.

Non-members are welcome to attend at a cost of €200. Morning and afternoon coffee breaks and lunch will be provided and are included in the registration fee.

Early bird rate: register before 25 October to get 10% off.

Registration will close on Monday 25 November.

For information about travel and accommodation please visit the event wiki page: http://wiki.opf-labs.org/pages/viewpage.action?pageId=32604217.

Date: 2 December 2013 to 4 December 2013

Preservation capabilities: How to assess? How to improve?

Digital preservation is making steady progress in terms of tool development, the progressive establishment of standards, and increasing activity in user communities, but there remains a wide gap when it comes to approaches that systematically assess, compare and improve how organisations go about achieving their preservation goals.

OPF Hackathon – A Practical Approach to Preservation Systems

The next OPF Hackathon – A Practical Approach to Preservation Systems – will focus on preservation workflows and systems. It will consider a large number of widely available tools that are being used both by the community and in industry.

Identification tools, an evaluation

We have created a testing framework based on the Govdocs1 corpus from Digital Corpora (http://digitalcorpora.org/corpora/files), and are using the characterisation results from Forensic Innovations, Inc. (http://www.forensicinnovations.com/) as ground truth. We have tested Tika 1.0, Fido 0.9.6 and DROID 6.0 with the V45 signature file. Tika generally performs best across the 20 most common formats; notably, for plain text files (text/plain), it is the only tool tested that correctly identifies the files. Tika is also the fastest of the tools, while Fido is the slowest.
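
For readers curious what a single identification call looks like, here is a minimal sketch using Tika's Java facade; the ground-truth lookup is a hypothetical stand-in for the Forensic Innovations data, not part of our framework:

    import java.io.File;
    import java.io.IOException;
    import java.util.Map;

    import org.apache.tika.Tika;

    // Minimal sketch of the per-file comparison: detect a MIME type
    // with Tika and check it against the ground-truth label recorded
    // for the same file.
    public class CompareToGroundTruth {

      public static boolean matches(File file, Map<String, String> groundTruth)
          throws IOException {
        // Tika.detect(File) returns a MIME type such as "text/plain".
        String detected = new Tika().detect(file);
        String expected = groundTruth.get(file.getName()); // hypothetical lookup
        return detected.equals(expected);
      }
    }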