Feed aggregator

At the Museum: An Interview with Marla Misunas of SFMOMA, Pt. 1

The Signal: Digital Preservation - 2 April 2014 - 1:44pm

Marla Misunas in her office at SFMOMA. Photo by Kelly Parady.

At the Museum is a new interview series highlighting the variety of digital collections in museums and the interesting people working to create and preserve these collections. For this first installment, I interview Marla Misunas, Collections Information Manager for the San Francisco Museum of Modern Art. Marla gives us some great detail about her role at SFMOMA, and offers a close-up view of the overall workflow process involved in placing their wonderful art collections online.  In a future post, Marla will tell us more about some of the specific tools and collections at SFMOMA.

Sue: Tell us about your background, and how you came to SFMOMA.

Marla: I have a BA in art history, and a master’s in museum studies. Since moving to California for graduate school, I’ve volunteered and done paid work in museums and other local institutions, including John F. Kennedy University, my alma mater. The San Francisco Museum of Modern Art was the place I always wanted to work. In fact, I applied for the same job in the registration department three years in a row before I was hired. I guess you could say perseverance has always been part of my DNA.

Sue: That perseverance has really paid off!  Could you tell us about your various roles at SFMOMA over the years?

Marla: I’ve worked at SFMOMA for over twenty years in both registration and collection information departments; my current title is Collections Information Manager. I manage our collection management system (CMS), which we use to track, document and manage our collections and works loaned to us. Staff members around the museum contribute to documentation about our collections via the system, starting before objects come in for accession, through their “life” cycle at the museum—from acquisition to object movement, loans, exhibitions, conservation and publications. The database is our authoritative source for information used by our digital asset management system, our online collection and just about any project where object data appears.

We train and support users in many different departments, not just in data entry or technical aspects like report writing, but also in terms of how things are done at SFMOMA and what our protocols are. We work across departments, achieving sensible solutions to issues that come up with a centralized database used by many staff members who have varying and sometimes contrasting concerns.

We act as quality control for our data. We review data for standards we maintain internally as well as external standards, such as the Library of Congress Name Authorities, the Union List of Artist Names, the Thesaurus of Geographic Names and others more specific to media, like photography, or disciplines, like architecture. We’ve set up our data so it complies with the various industry-wide standards that everyone uses, like Dublin Core and others that allow us to seamlessly contribute to federated databases like Artstor. Naturally, quality control is present in “Explore Modern Art,” our online collection, as well.
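
To make that kind of standards mapping concrete, here is a hypothetical Python sketch of how a collection record might be exported to simple Dublin Core elements for contribution to a federated database; the field names and values are invented for illustration and are not SFMOMA's actual schema or export code.

    # Hypothetical sketch: export a collection record to simple Dublin Core
    # elements for contribution to a federated database such as Artstor.
    # The field names below are invented for illustration, not SFMOMA's schema.

    def to_dublin_core(record):
        return {
            "dc:title": record["title"],
            "dc:creator": record["artist_display_name"],  # name checked against ULAN/LC authorities
            "dc:date": record["date_created"],
            "dc:format": record["medium"],
            "dc:identifier": record["accession_number"],
            "dc:rights": record.get("credit_line", ""),
        }

    print(to_dublin_core({
        "title": "Our Lady of the Iguanas",
        "artist_display_name": "Graciela Iturbide",
        "date_created": "1979",
        "medium": "photograph",          # placeholder value
        "accession_number": "0000.000",  # placeholder value
        "credit_line": "© Graciela Iturbide",
    }))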

Sue: Tell us a bit about your process for putting collections online at SFMOMA, and about other staff that are involved. 

Graciela Iturbide, La Nuestra Señora de las Iguanas, Juchitán, Oaxaca, Mexico (Our Lady of the Iguanas, Juchitán, Oaxaca, Mexico), 1979.

Our Lady of the Iguanas, Mexico, by Graciela Iturbide. 1979. © Graciela Iturbide, used with permission. http://www.sfmoma.org/explore/collection/artwork/7642

Marla: Before any artwork goes up, it has to go through our quality control process, including additional curatorial review in some cases. We focus on objects that are getting lots of attention at the time, such as things coming in for accessions, and those going out on loan or exhibition. I’m proud to say we have over 11,000 records—about a third of the museum’s collection—online now.

Sometimes an artist whose work we’ve collected will write to us, asking to see that work on the website. We might only have a couple of his or her artworks, representing a very small percentage of our entire collection; but those one or two things give visitors a peek at aspects of a whole life and career. Once the work appears, it’s not only good for us and the artist, but it might inspire visitors to find out more, or explore in a whole new direction. In a way, we’re building a library or directory of artwork that anyone can access. I find that so exciting!

I say “we,” but in reality the CMS team is Documentation Specialist Margaret Kendrick and me. Margaret has been with the museum for over ten years and she is an incredible resource not only for standard practices but she’s also a wonderful institutional memory bank. We work closely together and shared a small office for years. Recently when the museum closed, we moved out and were assigned to different locations so we could support database users wherever they might be. It’s been an adjustment for us not to just blurt out whatever we’re thinking and expect an immediate response!

We work within a terrific department, Collections Information and Access, headed up by Layna White. The three areas of the department are the database, intellectual property and imagery (primarily digital).

So, let’s say we want to put works online by artists in an exhibition we’ll soon be opening across town.  That doesn’t sound very complicated, does it? Let’s take a look at the part of the process that Collections Information and Access goes through, realizing we are only one department, and others have significant roles too.

After the checklist has been released by curatorial staff, Margaret and I review object records to make sure the data is consistent with our archive files, meets our standards, and is consistent with other works by the same artist, and that the artists’ information is accurate. Any further research we do is run past curatorial staff, who conduct their own review and research as well.

Sue: Now for the real heart of the matter, the images themselves.  How are they chosen, and what process do you go through to present them online?

Marla: Our visual resources associate, John Morris, looks for images of sufficient quality for the website. Sriba Kwadjovie, our intellectual property associate, checks for permission to use the images online. If we don’t already have permission, she’ll research the copyright owners and contact them.

If we don’t already have images, Susan Backman, our imaging coordinator, works with Katherine Du Tiel, head photographer, and Don Ross, photographer and imaging specialist, to create new ones.  If the work is in storage, where most things are at present, Susan coordinates with art handlers to unpack and install the work in the photo studio before the photographers get to work.

Potrero Hill, by Robert Bechtle. 1996. © Robert Bechtle, used with permission. http://www.sfmoma.org/explore/collection/artwork/104616

After they perform the image capture, Katherine and Don tweak and process images, getting them ready for presentation to John Morris, who ingests them into the digital asset management system. Once that has all happened, John alerts me, and the signal goes to the database to pick up the image from the image server on the way to Explore Modern Art's next update. The whole process, from start to finish, is like choreography—all the players must coordinate with each other, keep the art safe, and get the job done while performing many other duties and working alongside everyone else, who may be on completely different timetables and have completely different priorities.

All of those pieces eventually come together and voila! Artwork mysteriously appears on the website. I know I am just as guilty as the next person when it comes to wishing museums would have all their collections online–if you think about the workload overhead as well as daily workflow, you realize it can take tremendous effort and multiple staff members to get even one image up on a museum website. Think about something enormous, like Anselm Kiefer’s Osiris und Isis, and you’ll see what I mean.

Sue: What about archiving and backup of your collections?

Marla: We back up all museum systems nightly to tape, and mirror our servers at another museum outside of the region. We also save raw master image files in a sequestered section of the server, and keep the corrected master files (the huge high-resolution files, with color bars) in our digital asset management system. We create derivatives on the fly for the particular use we need as we go along.
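
As a rough sketch of what creating a derivative on the fly can look like in practice, here is a minimal Python example using the Pillow imaging library; the paths, sizes and quality settings are illustrative assumptions, not SFMOMA's actual pipeline.

    from PIL import Image  # Pillow

    def make_derivative(master_path, out_path, max_px=2048, quality=90):
        """Create a sized-down JPEG derivative from a high-resolution master.

        The master file is never modified; corrected masters stay in the digital
        asset management system and derivatives are generated for a particular use.
        """
        with Image.open(master_path) as img:
            img = img.convert("RGB")         # drop alpha / convert CMYK for JPEG output
            img.thumbnail((max_px, max_px))  # shrinks in place, preserving aspect ratio
            img.save(out_path, "JPEG", quality=quality)

    # Illustrative paths only
    make_derivative("masters/artwork_0001.tif", "web/artwork_0001.jpg")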

Sue:  What has been the biggest challenge you face getting digital collections online?

Marla:  Resources. I suppose everyone says the same thing, don’t they? All of us are working on so many different projects simultaneously that it takes real concerted effort, persistence, and project management to get data and images into Explore Modern Art. We don’t publish hundreds of new records with each update, but our totals climb steadily every year, with verified data and excellent images we’re proud to show on the site.

Marla’s notes on the artists and images above:
Graciela Iturbide—living Mexican photographer who shoots beautiful images in black and white. The iguana woman makes me think of a strong lady making her own way in the world, possibly in an unconventional way (or at least, unconventional to me). She also makes me think of the Statue of Liberty.

Robert Bechtle—local San Francisco Bay Area artist, photo-realist painter, self-portrait in his home in SF, near where I did my master’s degree in Museum Studies. We did a retrospective of his work in 2005 that was just wonderful. This kind of looks like my dad, who was also an artist. 

Categories: Planet DigiPres

Evaluation 1 - JPEG2000 validation

OPF Wiki Activity Feed - 2 April 2014 - 8:25am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-02T08:25:50Z

Evaluation 1 - JPEG2000 validation

SCAPE Wiki Activity Feed - 2 April 2014 - 8:25am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-02T08:25:50Z
Categories: SCAPE

Evaluation 1 - JPEG2000 validation

OPF Wiki Activity Feed - 2 April 2014 - 8:22am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-02T08:22:11Z

Evaluation 1 - JPEG2000 validation

SCAPE Wiki Activity Feed - 2 April 2014 - 8:22am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-02T08:22:11Z
Categories: SCAPE

Validation of Archival Content Against an Institutional Policy

OPF Wiki Activity Feed - 2 April 2014 - 7:03am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-02T07:03:06Z

Validation of Archival Content Against an Institutional Policy

SCAPE Wiki Activity Feed - 2 April 2014 - 7:03am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-02T07:03:06Z
Categories: SCAPE

Two or more things that I learned at the "Preserving Your Preservation Tools" workshop

Open Planets Foundation Blogs - 2 April 2014 - 6:37am
These have been two busy days in Den Haag, where Carl Wilson from the OPF showed us how to use tools to get clean environments and well-behaved installation procedures that will "always" work.
Using Vagrant (connected to an appropriate provider, in our experimental case VirtualBox) lets you start from a pristine box and experiment with installation procedures. Because everything is scripted or, better said, automatically provisioned, you can repeat your attempts until you reach an exactly clean and complete installation. The important point is that, once this goal is reached, sharing is easy: you simply publish the steps in a code repository.

The second day was given over to real experiments. We began by looking at how this has been done for jpylyzer, the indispensable tool for validating JPEG2000 files, created by Johan van der Knijff from the Nationale bibliotheek van Nederland, which hosted the event with the traditional Dutch welcome. Then we turned to the old but precious JHOVE tool, from Gary McGath, which has recently migrated to GitHub and is actively being converted to a Maven build process thanks to the efforts of Andrew Jackson and Will Palmer. By the end of the session we had a first (not so quick, but dirty) Debian package that installs the tool automatically on Linux boxes: it takes care of installing the wrapper script that hides the infamous Java idiomatics and of providing a default configuration file, so that a simple jhove -v works right after installation!

Another thing that caught my attention was the use of Vagrant as a simple way of making sure that every developer works against the same environment, so that there is no misconfiguration. If other tools are needed, an automatic provision can be set up and distributed; of course, the same process can be applied in production, making deployment as smooth as possible.

It now seems easy to have a base (or reference) environment plus the exact list of extra dependencies that a given program needs to run. From a preservation perspective this is quite enlightening, and it is very close to the work done by the PREMIS group on describing environments. We could then think about transforming the provisioning script into a PREMIS environment description, so that we have not only an operational route to emulation but also a standard description of it. The base environments could be collected in a repository and properly described, and the extra steps needed to revive a program could be embedded in the AIP of the program or of the data we are trying to preserve.

Incidentally, while we were working on these virtual environments, Microsoft announced the release of the source code of MS-DOS 2.2. It makes me wonder whether we could rebuild an MS-DOS box from scratch and use it as a base reference environment for all those "old" (only some thirty years old) programs.

All in all, these two days went by so quickly that we barely had time for a Dutch break along the Plein, but they were fruitful in leaving us determined to produce easier-to-use and better-documented tools that we can rely on to build great preservation repositories.
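
As a concrete illustration of the kind of check jpylyzer performs, here is a minimal Python sketch that runs the command-line tool on one file and reports whether the output declares it valid; it assumes jpylyzer is installed and on the PATH, the file name is only a placeholder, and the validity element is looked up loosely because its exact tag differs between jpylyzer versions.

    import subprocess
    import xml.etree.ElementTree as ET

    def jp2_is_valid(path):
        """Run jpylyzer on one file and report whether it declares the file valid.

        Assumes the jpylyzer command-line tool is on the PATH. The tag of the
        validity element differs between jpylyzer versions, so any element whose
        local name is 'isValid' or 'isValidJP2' is accepted.
        """
        result = subprocess.run(["jpylyzer", path],
                                capture_output=True, text=True, check=True)
        root = ET.fromstring(result.stdout)
        for elem in root.iter():
            local_name = elem.tag.rsplit("}", 1)[-1]  # strip any XML namespace
            if local_name in ("isValid", "isValidJP2"):
                return (elem.text or "").strip().lower() == "true"
        raise ValueError("no validity element found in jpylyzer output")

    print(jp2_is_valid("newspaper_page_0001.jp2"))  # placeholder file name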

 

Preservation Topics: Packaging, SCAPE, jpylyzer
Categories: Planet DigiPres

New Workflow for Viewshare

The Signal: Digital Preservation - 1 April 2014 - 7:00pm

In previous posts Trevor Owens and I showcased changes that are coming to the Viewshare platform. You can test out these changes over the next couple of months at http://beta.viewshare.org (your current login credentials will work). In addition to improved functionality and usability of the views, the workflow to load and build views has also been updated. In this post I’ll detail these workflow improvements.

Condensed Workflow

In the current version of Viewshare the workflow is broken into steps where users load data, edit data, build views, and then publish and share them (or keep them private). The data and the views are handled in separate workflows. In response to user feedback, editing data and building views are now done in a single, iterative workflow. After login, to get started simply click “Create View” and you’ll be taken to a preview of what your data would look like if you didn’t make any changes. If you’re a Viewshare super-user this could save some time and get you right to publishing your view, but chances are you’ll want to edit data and customize your view.

Start by creating a view.

Edit your Data

From the “Edit” button users can define data types and augment data to build specialized views. The new process is streamlined and message cues are improved. As in the original Viewshare platform, place information, date information and text information can be transformed into the lat/long, ISO date, and list array data standards that build the interactive views.
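
For a sense of the kind of transformation involved, the minimal Python sketch below converts a free-text date into an ISO date and splits a delimited text field into a list array; the date formats and delimiter handled are assumptions for illustration, not Viewshare's own augmentation code (and geocoding place names to lat/longs would additionally require a lookup service).

    from datetime import datetime

    # Minimal sketch of two Viewshare-style augmentations. The formats and the
    # delimiter handled here are assumptions for illustration only.

    DATE_FORMATS = ("%B %d, %Y", "%m/%d/%Y", "%Y-%m-%d", "%Y")

    def to_iso_date(raw):
        """Return an ISO 8601 date string, or None if the value cannot be parsed."""
        for fmt in DATE_FORMATS:
            try:
                return datetime.strptime(raw.strip(), fmt).date().isoformat()
            except ValueError:
                continue
        return None

    def to_list_array(raw, delimiter=";"):
        """Split a delimited text field into the list used by faceted views."""
        return [part.strip() for part in raw.split(delimiter) if part.strip()]

    print(to_iso_date("April 1, 2014"))             # -> 2014-04-01
    print(to_list_array("maps; photographs; Ohio")) # -> ['maps', 'photographs', 'Ohio']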

Transform place information into lat/longs to build a map view.

Build & Preview Views

The new workflow allows users to simultaneously build and preview views. Users can choose the page layout by selecting which widgets or facets to use on the build screen and select views to build and edit.

Build a map view.

Edit Data Again

Often in the process of building and refining a view you will find something you’ll want to change in the data presentation. The real advantage of the new workflow is being able to toggle back and forth between building views and editing data.

Indicate the data is a URL.

Add More Views

Because it is so easy to move between editing data and building views, users can iteratively create maps, timelines, tables, charts, graphs, and photo, audio and video galleries.

Add a photo gallery view.

Save, Share, Embed

When views are ready to share there are lots of options. Users can save their views as public on Viewshare.org. It’s also possible to share views with just a few people by sharing a private link. And, as always, views can be embedded into any website using the embed code.

Share views with a private link.

We hope that these changes make your experience with Viewshare easier and more useful. As always we welcome comments and questions in the comments section of this post or in the Viewshare user feedback forum.

Categories: Planet DigiPres

Validation of Archival Content Against an Institutional Policy

OPF Wiki Activity Feed - 1 April 2014 - 11:59am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-01T11:59:52Z

Validation of Archival Content Against an Institutional Policy

SCAPE Wiki Activity Feed - 1 April 2014 - 11:59am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-01T11:59:52Z
Categories: SCAPE

Validate JPEG2000 Newspapers Using Jpylyzer

OPF Wiki Activity Feed - 1 April 2014 - 11:59am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-01T11:59:08Z

Validate JPEG2000 Newspapers Using Jpylyzer

SCAPE Wiki Activity Feed - 1 April 2014 - 11:59am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-01T11:59:08Z
Categories: SCAPE

Experiment Overview

OPF Wiki Activity Feed - 1 April 2014 - 10:37am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-01T10:37:52Z

Experiment Overview

SCAPE Wiki Activity Feed - 1 April 2014 - 10:37am

Page edited by Rune Bruun Ferneke-Nielsen

View Online Rune Bruun Ferneke-Nielsen 2014-04-01T10:37:52Z
Categories: SCAPE

Final Developer Workshop Ideas

OPF Wiki Activity Feed - 1 April 2014 - 5:59am

Page edited by Matthias Rella

View Online Matthias Rella 2014-04-01T05:59:36Z

Final Developer Workshop Ideas

SCAPE Wiki Activity Feed - 1 April 2014 - 5:59am

Page edited by Matthias Rella

View Online Matthias Rella 2014-04-01T05:59:36Z
Categories: SCAPE

Portability of SCAPE Platform over Multiple Cloud Environments

OPF Wiki Activity Feed - 31 March 2014 - 10:46pm

Page edited by Daniel Pop

View Online Daniel Pop 2014-03-31T22:46:57Z

Portability of SCAPE Platform over Multiple Cloud Environments

SCAPE Wiki Activity Feed - 31 March 2014 - 10:46pm

Page edited by Daniel Pop

View Online Daniel Pop 2014-03-31T22:46:57Z
Categories: SCAPE