It is a critical time in the process of setting up the APA Centre of Excellence for Digital Preservation, the culmination of more than 5 years’ work, and so it seems the right time to write a post about it.
The APA’s strategic plan laid out a roadmap for its expansion, to be achieved largely through EU projects. Preliminary work was done through CASPAR, PARSE.Insight and ODE. Their results (their deliverables are on the APA site) are being carried forward in the two projects I will focus on here.
The first of these projects is APARSEN, a network of excellence with more than 30 partners; its aim is to integrate the various, often inconsistent, strands of research in digital preservation, and to help create the Centre of Excellence. Its website is part of the APA’s site. The second, complementary, project is SCIDIP-ES. This is creating a set of services which can supplement the digital preservation capabilities of existing repositories, bringing them nearer to the level required for a successful ISO 16363 audit. The SCIDIP-ES website is separate from the APA’s at the moment, but the software is being handed over to the APA right now.
At the time the strategic plan was drawn up, one of the aims was to move beyond digital preservation research into industrial-strength digital preservation and even beyond that. By this I mean that the value of our digital capital was of course already recognised, but the demand for increasing value was also growing. If, following OAIS, digital preservation is defined as maintaining the understandability and usability of digital objects by a specified community, then there is a natural extension towards enabling the digital object to be used by a broader community, i.e. to be more usable by more people, thereby becoming more valuable.
What was not really anticipated was that EU funding for “blue-sky” digital preservation research would essentially come to an end. On the other hand, since more than 100M Euros has been invested by the EU in digital preservation research, it is perhaps unsurprising that some payback was expected.
The question “who pays for digital preservation and why?” was increasingly heard, and there were insufficient answers. Studies such as that of the Blue Ribbon Task Force suggested ways to look at the issue. The Riding the Wave report, to which I contributed, recognised the issue but did not address it.
This has encouraged APARSEN to produce an integrated view of digital preservation within a business process. A key aspect of this is provided by SCIDIP-ES because that project has created tools and services which add to value through increased usability.
It has taken several years to put all these pieces in place. APARSEN has investigated many of the silos of digital preservation research, collecting, evaluating and integrating the many isolated pieces of research – research which used different terminology, concepts and approaches. SCIDIP-ES has created a solid collection of customisable software tools and services. In addition, ISO 16363 has been created, by myself and the PTAB team, and may provide the basis for a real market for digital preservation.
Over the next several months the Centre of Excellence’s offerings will be put in place, including training, software services, tools and consultancy. These will be provided from the outcomes of the projects as well as by APA members.
It is certainly going to be an interesting time.
The workshop presents a number of outcomes of the APARSEN project. APARSEN is a European project that runs until the end of this year, aimed at the establishment of a Virtual Centre of Excellence on digital preservation. The project partners gathered expertise on a wide range of digital preservation issues, clustered in four topics: (1) Trust, (2) Sustainability, (3) Accessibility, and (4) Usability. The workshop focuses on the last two topics covered in the APARSEN project. The most important outcomes of the work done in the project are presented and discussed in the workshop.
The usability aspect of digital preservation is covered by a presentation on interoperability and intelligibility. The focus in this presentation will be on solutions for semantic interoperability in order to keep data understandable and processable in the long run.
The accessibility aspect of digital preservation is covered by three presentations. The first contribution concerns the importance of persistent identifiers in providing durable access to digital objects. More specifically, this presentation pays attention to an interoperability framework that enables seamless access to digital objects that have two or more persistent identifiers. Two other contributions in the access topic concern an analysis of digital rights and access management of digital objects and a presentation on policy issues relevant to digital preservation. Both presentations pay attention to the outcomes of a survey carried out in the framework of the APARSEN project.
Duration: 09:00 – 12:30; 2 July 2014
Each presentation takes 40 minutes and includes at least 15 minutes for discussion.
09:00-09:10 Introduction on the workshop. Background on the APARSEN project and context of the topics covered in the workshop
Download: LIBER 2014 – Workshop on Usability and Accessibility Aspects of Digital Preservation
09:10-09:50 Interoperability and Intelligibility (Yannis Kargakis, Forth, Greece)
Download: LIBER 2014: Interoperability and Intelligibility
09:50-10:30 Persistent identifiers (Maurizio Lunghi, FRD, Italy)
Download: LIBER 2014: Persistent Identifiers
10:30-11:00 Coffee break
11:00-11:40 Digital rights and access management (Stefan Hein, DNB, Germany)
Download: LIBER 2014: Digital rights and access management
11:40-12:20 Data policies and governance (Juha Lehtonen, CSC, Finland)
Download: LIBER 2014: Data Policies (and Governance)
12:20-12:30 Wrap-up and conclusions
Prof. Paolo Bouquet, Trento, Italy
Paolo Bouquet is professor of computer science at the Management School of the University of Trento (Italy). In addition to his substantial academic record, he has been successfully creating and leading large, publicly funded research projects, including the FP7 Integrated Project OKKAM – Enabling the Web of Entities. Paolo is one of the initiators of the open data initiative in the Trentino Region (“Trentino Open Data”). Paolo is Chairman of the Board and co-founder of OKKAM s.r.l. In that role, he has led projects in the areas of tax collection, public transportation systems, data integration, and tourism.
Paolo was the Director of the DIGOIDUNA Study, an EC report on “Digital Object Identifiers and Unique Authors Identifiers to enable services for data quality assessment, provenance and access”, which is currently a key document in the H2020 strategy in the area of Persistent Identifiers for e-Infrastructures.
Yannis Kargakis, Forth, Greece
Yannis Kargakis is currently a Research & Development Engineer in Information Systems Lab at FORTH-ICS (Greece). He conducted his undergraduate and graduate studies (MSc) in the Computer Science Department at University of Crete. His research interests fall in the following areas: Information Systems, Digital Preservation, Semantic Web, and Dependency Management Reasoning. He has participated in the research projects: APARSEN NoE and SCIDIP-ES.
Maurizio Lunghi, FRD, Italy
Maurizio Lunghi has a degree in Electronic Engineering, Telecommunications and Telematics – Internet technology and networking. He has experience of working with international projects funded by the European Commission, where he worked for 3 years as a project officer in DG INFSO in Luxembourg. He has extensive experience organising events and coordinating expert groups. He has participated in research projects on networking and ICT, high-resolution imaging, telemedicine applications, digital libraries and digital preservation related issues. Since 2010 he has been involved in the APARSEN project. His interests and activity lie in the area of persistent identifier systems, trusted digital repository criteria, audit and certification.
Stefan Hein, DNB, Germany
Stefan Hein has been a software developer at the DNB since 2010, working on the processing and digital preservation of digital objects. He graduated with a Diploma in Computer Science from the Humboldt University of Berlin. The current main focus of his work is the further development of the ingest workflow, for example with capabilities for validating and identifying digital objects for their long-term preservation.
Juha Lehtonen, CSC, Finland
Dr. Juha Lehtonen works as an Applications Architect at CSC – IT Center for Science, Finland. He joined CSC in 2012 to create digital preservation solutions for the cultural heritage of Finland, and he is also involved in APARSEN. In 2009-2012 he founded a digitization center for the natural history collections of Finland, planning and implementing the technical side of the digitization processes. In 2005-2009 he worked as a researcher at the University of Joensuu, until he received his Ph.D. in Computer Science.
APARSEN/SCAPE Project Satellite Event – Long term accessibility of digital resources in theory and practice
Vienna, Austria – 21 May 2014
It took place in the context of the 3rd LIBER Workshop on Digital Curation “Keeping data: The process of data curation” (19-20 May 2014)
An overview of management aspects such as digital rights management, policies and costs, as well as technical aspects with a focus on preservation planning and scalability in digital preservation, was given. Insights into the day-to-day practice of digital preservation fostered the understanding of theoretical concepts developed in the two EU-funded projects.
The programme was as follows:
09:00 – 10:30
- Sabine Schrimpf (German National Library): Digital Rights Management in the context of long-term preservation
- Ross King (Austrian Institute of Technology): The SCAPE Project and Scalable Quality Control
- David Wang (SBA Research): Understanding the Costs of Digital Curation
11:00 – 12:30
- Sven Schlarb (Austrian National Library): Application scenarios of the SCAPE project at the Austrian National Library
- Krešimir Đuretec (Vienna University of Technology): SCAPE Preservation Planning and Watch
- Ruben Riestra (INMARK): Answering Key Questions
Austrian National Library
If you were not able to make it to the conference in New Delhi, you can still catch it all!
The videos, presentations, discussions and papers are all available here
I’ve been struggling to find a way to mark the passing of UKOLN, or at least UKOLN as we knew it (I’m not sure whether the remaining rump is still called UKOLN; the website has not been updated with much if any information about the changes that occurred on 1 August, as of this writing). I enjoyed the tweet-sized memories yesterday under the #foreverukoln hashtag. The trouble is, any proper marking of UKOLN needs more than a tweet, more than a post, more even than a book. And any less proper marking risks leaving out people who should be thanked.
But, I can’t just leave it unmarked. So you have to accept that this is just some of the things I’ve appreciated from UKOLN, and names just some of the many people from UKOLN who have helped and supported me. If you’re left out, please blame my memory and not any ill-intent, but also note this doesn’t attempt to be comprehensive.
So here’s the first thing. I’ve found in my store of ancient documents the text of the draft brochure for the eLib Programme, written in 1995 or 1996 (some of you will remember its strange square format and over-busy blue logo). Right at the bottom it says:
“The eLib web pages are maintained by UKOLN and can be found at
Now (currently at least), if you click on that link it will still work, redirecting you to http://www.ukoln.ac.uk/services/elib/. There have been multiple changes to the UKOLN website over the years, and they have always maintained the working links. I don’t know most of the people who did this (though Andy Powell and Paul Walk both had something to do with it), but my heartfelt thanks to them. Those readers who work anywhere near Library management or Library systems teams: PLEASE demand that prior URIs continue to work when getting your websites updated!
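To illustrate how little machinery this kind of URI persistence needs (a hypothetical sketch, not UKOLN’s actual configuration): a single permanent-redirect rule in the web server keeps every old link working, for example in Apache:

```apache
# Hypothetical example: permanently redirect the legacy eLib path
# to its current home. mod_alias preserves any trailing path, so
# /elib/projects/ also redirects to .../services/elib/projects/.
Redirect permanent /elib/ http://www.ukoln.ac.uk/services/elib/
```

One line like this per moved section is usually all it takes to honour two decades of inbound links.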
The first phase of the eLib programme had around 60 projects, many of them 3 year projects. As we moved towards the second and third phases, the numbers of projects dropped, and it was clear that the UK’s digital library movement was losing many people with hard-won experience in this new world. (In fact, we were mainly losing them to the academic Libraries, so it was not necessarily a Bad Thing.) I remember trying to persuade JISC that we needed a few organisations with greater continuity, so we wouldn’t always have new project staff trying to learn everything from the ground up. Whether they listened or not, over the years UKOLN provided much of that continuity.
Another backroom group has also been hugely important to me. Over the 15 years I was working with them, UKOLN staff organised countless workshops and conferences for eLib, for JISC and for the DCC. These staff were a little better known publicly, as they staffed the welcome desks and communicated personally with many delegates. They were always professional, courteous, charming, and beyond helpful. I don’t remember all the names; I thank them all, but remember Hazel Gott from earlier days, and Natasha Bishop and Bridget Robinson in more recent times.
A smaller group with much higher visibility would be the Directors of UKOLN. Lorcan Dempsey was an inspired appointment as Director, and his thoughtful analyses did much to establish UKOLN as a force to be reckoned with. I’d never met anyone who read authors like Manuel Castells for fun. I was a simple-minded, naïve engineer, and being in 4-way conversations with Lorcan, Dan Greenstein of the AHDS, and John Kelleher of the Tavistock Institute, larded with long words and concepts from Social Science and Library Science, sometimes made my brain hurt! But it was always stimulating.
When Lorcan moved on, the role was taken by Liz Lyon, whom I had first met as project coordinator of the PATRON project at the University of Surrey. A very different person, she continued the tradition of thoughtful analyses, and promoted UKOLN and later the DCC tirelessly with her hectic globetrotting presentations. She was always a great supporter of and contributor to the DCC, and I have a lot to thank her for.
One of the interesting aspects of UKOLN was the idea of a “focus” person. Brian Kelly made a huge impact as UK Web Focus until just yesterday, and though our paths didn’t cross that often, I always enjoyed a chat over a pint somewhere with Brian. Paul Miller, if I remember right, was Interoperability Focus (something to do with Z39.50?), before moving on to become yet another high-flying industry guru and consultant!
That reminds me that one of my favourite eLib projects was MODELS (MOving to Distributed Environments for Library Services, we were big on acronyms!), which was project managed by Rosemary Russell, comprising a series of around 11 workshops. The second MODELS workshop was also the second Dublin Core workshop, so you can see it was at the heart of things. Sadly at the next workshop I coined the neologism “clumps” for groups of distributed catalogues, and nobody stopped me! We chased around a Z39.50 rabbit hole for a few years, which was a shame, but probably a necessary trial. Later workshops looked at ideas like the Distributed National Electronic Resource, information architectures, integrated environments for learning and teaching, hybrid environments, rights management and terminologies. And the last workshop was in 2000! Always huge fun, the workshops were often chaired by Richard Heseltine from Hull, who had a great knack for summarising where we’d got to (and who I think was involved directly in UKOLN oversight in some way).
Rachel Heery also joined UKOLN to work on an eLib project, ROADS, looking at resource discovery. She had a huge impact on UKOLN and on many different areas of digital libraries before illness led to her retirement in 2007 and sadly her death in 2009. The UKOLN tribute to her is moving.
UKOLN did most of the groundwork on eLib PR in the early days, and John Kirriemuir was taken on as Information Officer. I particularly remember that he refused to use the first publicity mugshot I sent; he told me over the phone that when it opened on his PC someone in the office screamed, and they decided it would frighten small children! I think John was responsible for most of the still-working eLib website (set up in 1995, nota bene Jeff Rothenberg!).
Ariadne has become strongly identified with UKOLN, but was originally suggested by John MacColl, then at Abertay, Dundee and now St Andrews, and jointly proposed by John and Lorcan as a print/electronic parallel publication. John Kirriemuir worked on the electronic version in the early days, I believe, later followed by Philip Hunter and Richard Waller, both of whom also worked on IJDC (as also did Bridget Robinson). Ariadne is a major success; I am sure there are many more who worked on making her so, and my thanks and congratulations to all of them.
Most recently I interacted with UKOLN mostly in terms of the DCC. As well as Liz and those working on IJDC, Alex Ball, Michael Day, Manjula Patel and Maureen Pennock made major contributions, and wrote many useful DCC papers.
Last but by no means least, we tend to forget to thank the office staff behind the scenes. I don’t remember most names, my sincere apologies, but you were always so helpful to me and to others, you definitely deserve my thanks.
… and to so many more UKOLN staff over the years, some of whom I should have remembered and acknowledged, and some of whom I didn’t really know: thanks to you from all of us!
At Conterpoint 2013, I put out two free CDs of FTL that people could take under the condition that they write a public review, and both of them were taken. I don’t know who these people are, or whether they’ve published a review yet or are still working on it, but if you’re one of them, could you give me a link to the review when it’s available?
From now through July 4, you can get Files that Last on Smashwords at 40% off by entering the coupon code XE93J. The 4th of July is a day not only for fireworks but for thinking about historical documents, and for making sure they survive even if they accidentally turn into fireworks.
OK, it’s not light summer reading, but brushing up your preservation skills never hurts.
Thanks to Deb Wunder for giving FTL a mention.
From Juan Bicarregui (STFC):
The link below leads to this week’s statement by G8 ministers on “international issues that require global cooperation”.
The four topics discussed were:
- global challenges
- global research infrastructure
- open scientific research data
- increasing access to the peer-reviewed, published results of scientific research
The third and fourth topics will be of particular interest to APA and APARSEN members.
Interoperability of Persistent Identifiers Systems – services across PI domains
Date: Thursday, 5th September 2013, afternoon
Location: IST – Instituto Superior Técnico in Lisbon, Portugal
Maurizio Lunghi and Emanuele Bellini (Fondazione Rinascimento Digitale/APARSEN), Renè Van Horik (DANS), Barbara Bazzanella and Paolo Bouquet (UNITN), Bas Cordewener (Knowledge Exchange), Anila Angjeli (ISNI), Giovanni Bergamin (Central National Library in Florence), Norman Pasquin (LCC and DOI foundation), John Kunze (California Digital Library), Tobias Weigel (RDA), Antoine Isaac (Europeana), (NN, EUDAT)
The central goal of this second edition of the workshop on Interoperability of Persistent Identifier Systems (www.rinascimento-digitale.it/workshopPI2012) is to bring together representatives from different PI communities to discuss potential benefits for final users, as well as the challenges, requirements and technologies involved in implementing an effective interoperability solution for different PI systems and related services. Supporters of this workshop proposal and the experts on the programme committee represent large and significant PI user communities.
A first section is devoted to users and to the potential services and benefits for final users that could be built on such an interoperability framework. Participants are involved in describing future user scenarios and potential applications of PI systems, making user benefits and requirements evident.
A second section is focused on technical aspects of implementing an interoperability solution and related services. As a starting point for the technical discussion, the new Interoperability Framework (IF) for PI systems, proposed by the APARSEN project and refined by a large group of independent experts, is described and a demonstrator is presented. The IF model is suitable for all the different user requirements and is adoptable by all PI user communities.
Participants are invited to compare their requirements with the IF features and assumptions, weighing various aspects of the model, its potential benefits, and concrete terms for a common roadmap for implementing the framework, in order to build consensus on developing joint cross-domain applications.
Representatives of the most relevant PI initiatives and different PI user communities are invited to report on current activities and vision, but also on possible approaches to define interoperability solutions and services and expose their position towards needs and opportunities of moving toward the implementation of a comprehensive interoperability technological solution for PI systems.
Joint APARSEN/4C workshop:
What does it cost? – EU Activities to Assess the Cost of Digital Curation
Date: Thursday, 5th September 2013, afternoon
Location: IST – Instituto Superior Técnico in Lisbon, Portugal
Subject of the Workshop:
Digital preservation and curation activities tend to be costly and complex and require a long-term commitment. Without reliable cost information and accurate cost models, it is hard to plan and budget such activities properly.
A number of digital curation cost models have been developed in recent years, and initiatives like Knowledge Exchange, the Digital Curation Centre, and the Blue Ribbon Task Force for Economically Sustainable Digital Curation have, among others, looked at the costs and benefits of keeping digital data available for the long term. Most recently, the APARSEN project has provided a high-level analysis of published cost models, reviewed cost parameters in relation to the trusted digital repositories standard, ISO 16363, and investigated the level of preparedness of research libraries to ensure economically sustainable digital preservation.
The new EU project 4C – ‘the Collaboration to Clarify the Costs of Curation’ – draws all of these initiatives and their results together. It networks existing projects and initiatives and will ensure that where existing work is relevant, stakeholders realize it and understand how to employ it. A key aim for this workshop is to build bridges between ongoing costs-related initiatives to enable 4C to identify areas where good progress has been made and also to understand how current cost models might be augmented to improve ease of use and increase uptake. Ultimately, 4C will help organizations to approach their investment in data curation and preservation with greater certainty and with greater clarity about what they will get back in return. The project partners will use the workshop as an opportunity to set the scene for their topic, present their approach (“assess, enhance, engage”) and invite feedback of workshop participants.
A key point for the open discussion session will be to identify difficulties that ongoing costs-related initiatives have had in collecting cost information and encouraging use of their models. During the session we will invite input from these initiatives into how 4C might help to overcome these difficulties to realise increased uptake of the models and ultimately an improved understanding of curation costs.
Veronika Prändl-Zika, Austrian National Library, presented the APARSEN project in the course of the Metaday #59 at the Metalab in Vienna on June 7, 2013. For more details: https://metalab.at/wiki/Metaday_59
Just for fun, here’s a video of me singing the “Files that Last” song at the Dartmouth College Library. The sound isn’t great, and it’s obvious why I never went for a career as a singer, but it was fun.
It’s mid-May, and graduations are already starting. Those of you who teach know it isn’t too soon to plan for the fall’s courses. If you’re teaching a course that touches on system management, data maintenance, or preservation issues, you should consider including Files that Last on its reading list.
Preservation Services at Dartmouth College offered a reading list in digital preservation in 2012. That list, which predates FTL, suggests several books which focus on preservation from an institutional standpoint. The Planets Project (which has become the Open Planets Foundation) has an older but longer bibliography in a similar vein. Files that Last complements books like these with its focus on a broader computer audience, the people who need to do preservation as an aspect of their regular work, rather than being primarily information curators.
If your students read Files that Last, it will help them understand the issues of data preservation and loss and appreciate the importance of good data maintenance practices, and they’ll learn habits that will let them better control the data in their own lives and their future jobs.
I’ve launched a page of updates and errata for Files that Last, with some new information on the WebP still image format. As I learn about things that have changed or mistakes in the book, I’ll add to the page.
If you spot anything that you think needs fixing, please let me know.
Smashwords was taking forever to get “technical integration” from Amazon, and when I got a query from a friend about Amazon availability, I decided to go with KDP (Kindle Direct Publishing). Amazon’s registration process isn’t more painful than you’d expect, given that they need to pay me and report my income, and the submission process gives me more control than Smashwords’ does, though it takes more work to take full advantage of it. (The best way to submit a book to KDP is as an HTML file with detailed CSS, and saving as HTML from OpenOffice gives you that. I had to make some manual changes to the CSS for a good result.) This means there are some differences in formatting between the Smashwords and KDP editions. There shouldn’t be any differences in content.
I’m not thrilled with Amazon’s commitment to DRM, closed platforms, and licensing rather than really selling e-books, but I don’t dislike them enough to cut myself off from that market. So if you’ve been holding out for the Kindle version, wait no more!
Yes, it’s only tres de mayo, but Sunday is a lousy day to hold a sale. Besides, today is International Day against DRM. From today through the 5th, you can get Files that Last on Smashwords — DRM-free, of course — for the super-low price of $3.20 instead of the usual $7.99. Enter the coupon code TT58Q when buying the book to get this price. If you already have it, why not buy a copy for a friend or colleague?
This applies only to copies bought on Smashwords, not on other sites. Sorry if you prefer to buy on the iTunes store, but I’m not able to issue coupons for other sites.
Correction: Earlier I’d listed $2.99. I wasn’t able to set the price directly on Smashwords, so I had to set it as a percentage off and made it 60% off, setting it to $3.20. Apologies to anyone who was annoyed by the discrepancy.
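For anyone puzzled by the odd price, the rounding works out as follows (a quick sanity check of the arithmetic, not Smashwords’ actual code):

```python
list_price = 7.99
discount = 0.60  # the 60%-off setting mentioned above

# 7.99 * 0.4 = 3.196, which rounds to 3.20 rather than
# the originally advertised 2.99.
sale_price = round(list_price * (1 - discount), 2)
print(sale_price)  # 3.2, i.e. $3.20
```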
Files that Last is the first e-book on digital preservation directed at “everygeek.” In case your layout doesn’t show you the page links (e.g., on a mobile device), you can read what the book’s about and how to get it here.
I’ve spent the last few months looking at the JISC data management planning projects. It’s been very interesting. Data management planning for research is still comparatively immature, and so are the tools that are available to support it. The research community needs more and better tools at a number of levels. Here are my thoughts… what do you think?
At group or institution level, we need better “maturity assessment” tools. This refers to tools like:
- DCC CARDIO for assessing institutional readiness,
- the DCC Digital Asset Framework for understanding the landscape of data resources,
- repository risk assessment and quality assessment tools like DRAMBORA, Data Seal of Approval, etc
- security assessment tools including audits based on ISO 27000.
Some of the existing tools seem rather ad hoc, as if they had emerged and developed from somewhat casual beginnings (perhaps not well put; maybe from beginnings unrelated to the scale of tasks now facing researchers and institutions). It is perhaps now time for a tool assessment process involving some of the stake-holders to help map the landscape of potential tools, and use this to plot development (or replacement) of existing tools.
For example, CARDIO and DAF, I’m told, are really tools aimed at people acting in the role of consultants, helping to support a group or institutional assessment process. If they could be adjusted to be more self-assessment-oriented, that might be helpful. The DAF resource really needs to be brought up to date and made internally consistent in its terminology.
Perhaps the greatest lack here is a group-oriented research data risk-assessment tool. This could be as simple as a guide-book and a set of spreadsheets. But going through a risk assessment process is a great way to start focusing on the real problems, the issues that could really hurt your data and potentially kill your research, or those that could really help your research and your group’s reputation.
We also need better DMP-writing tools, i.e. better versions of DMPonline or DMP Tool. The DCC recognises that DMPonline needs enhancement, and has written in outline about what they want to do, all of which sounds admirable. My only slight concern is that the current approach, with templates for funders, disciplines and institutions in order to reflect all the different nuances, requirements and advice, sounds like a combinatorial explosion (I may have misunderstood this). It is possible that the DMP Tool approach might reduce this combinatorial explosion, or at least parcel elements of it out to the institutions, making it more manageable.
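To make the combinatorial worry concrete: if a distinct template has to be maintained for every funder/discipline/institution combination, the count grows multiplicatively. A toy calculation (the counts here are invented purely for illustration):

```python
funders = 20        # hypothetical counts, for illustration only
disciplines = 30
institutions = 150

# One template per (funder, discipline, institution) combination:
combinations = funders * disciplines * institutions
print(combinations)  # 90000 distinct templates to maintain
```

Parcelling one dimension out (say, letting each institution maintain only its own layer, as the DMP Tool model suggests) turns the product into a sum of much smaller per-institution products, which is what makes it more manageable.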
The other key thing about these tools is that they need better support. This means more resources for development and maintenance. That might mean more money, or it might mean building a better Open Source partnership arrangement. DMPonline does get some codebase contributions already, but the impression is that the DMP Tool partnership model has greater potential to be sustainable in the absence of external funding, which must eventually be the situation for these tools.
It is worth emphasising that this is nevertheless a pretty powerful set of tools, and potentially very valuable to researchers planning their projects and institutions, departments etc trying to establish the necessary infrastructure.
Organised by IEDA and Elsevier Research Data Services, the International Data Rescue Award in the Geosciences was created to improve the preservation of and access to research data, particularly dark data, and to share the varied ways in which these data are being processed, stored, and used. For more information see http://researchdata.elsevier.com/datachallenge
The organisers are interested in receiving submissions from groups who have developed and completed projects that have digitized previously unavailable content or that have facilitated and improved the ingestion of research data. The final submission deadline is October 10, 2013.