[OpenSPIM] storage strategies for SPIM datasets?
Maarten Hilbrant
m.hilbrant at uni-koeln.de
Wed Jun 17 07:47:47 CDT 2015
Dear all,
our SPIM (which is in fact an mDSLM) is up and running - wonderful.
This also means that we're producing a lot of data very quickly: typically about 500 GB per time-lapse recording, at roughly one recording per week (so on the order of 25 TB per year). In an attempt to streamline our data analysis and storage workflow, I'd like to hear what your experiences are.
I currently just dump our data onto our two 13.5 TB NAS servers (a second set of NAS servers is used as a backup). Whenever I want to analyse a dataset, I first copy it to the 5 TB internal hard drive of a reasonably powerful workstation, as reading/writing is often the limiting step in most analysis procedures. After cropping, making Z-projections, 3D registration/fusion etc., I copy the results back to subfolders of the original data on the NAS server, creating an ever-growing swamp of data - until I've finally convinced myself that it's OK to delete the raw data :-)
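For context, the staging step is essentially the following, sketched here in Python. All paths are placeholders rather than our actual mounts, and the subprocess call just stands in for the real processing (which in practice might be a headless Fiji run or similar):

#!/usr/bin/env python
# Rough sketch: stage a dataset on a fast local disk, run an
# analysis step, and copy the results back next to the raw data.
# All paths and the processing command are placeholders.
import shutil
import subprocess
from pathlib import Path

NAS_ROOT = Path("/mnt/nas/spim")   # hypothetical NAS mount point
SCRATCH = Path("/scratch/spim")    # fast internal drive

def stage_and_process(dataset_name):
    src = NAS_ROOT / dataset_name
    local = SCRATCH / dataset_name
    shutil.copytree(src, local)    # stage raw data locally (must not exist yet)

    results = local / "fused"
    results.mkdir(exist_ok=True)
    # Placeholder for the real work (cropping, Z-projections,
    # registration/fusion); here it just echoes the path.
    subprocess.run(["echo", "processing", str(local)], check=True)

    # Copy derived data back into a subfolder next to the raw data,
    # then free the scratch space.
    shutil.copytree(results, src / "fused")
    shutil.rmtree(local)

if __name__ == "__main__":
    stage_and_process("2015-06-10_timelapse")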
This "workflow" has worked for me reasonably well, but it is rather difficult to keep an overview of all the data and I just know I'm occupying much more storage space than strictly necessary to answer our biological questions. So I'm investigating the possibilities for setting up a relational database to at least keep track of all the metadata, analyses performed etc. Ideally, such a database would include thumbnails of the imaging data and allow for easy import of metadata. Any ideas? Of course not very "open", but I'm tempted to use Filemaker Pro. Anything else seems like re-inventing the OMERO wheel. Does any one have experience with using OMERO for storing all data from large SPIM datasets? Or would you just store projections and keep the raw data somewhere else? Any ideas appreciated!
cheers,
Maarten