I’ll Take That to Go: Big Data Bags and Minimal Identifiers for Exchange of Large, Complex Datasets

Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshalling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.

Many domains of science must frequently manage, share, and analyze complex, large data collections—what we call datasets in this paper—that comprise many, often large, files of different types. Biomedical applications are a case in point [1]. For example, an imaging genetics study may encompass thousands of high-resolution 3D images, terabytes in size; whole genome sequences, each tens of gigabytes in size; and other heterogeneous clinical data.

Data management and analysis tools typically assume that the data are assembled in one location. However, this assumption is often not well founded in the case of big data. Due to their size, complexity, and diverse methods of production, the files in a dataset may be distributed across multiple storage systems: for example, multiple imaging and genomics repositories. As a result, apparently routine data manipulation workflows become rife with mundane complexities as researchers struggle to assemble large, complex datasets; position data for access by analytic tools; document and disseminate large output datasets; and track the inputs and outputs of analyses for purposes of reproducibility [2].
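To make the contrast concrete, the sketch below shows how a BagIt-style bag, the format on which BDBags build, can enumerate remote dataset members in a fetch.txt file rather than copying them to a single location; the repository URLs, file names, and sizes are hypothetical and serve only to illustrate the layout. Each fetch.txt line records a URL, a length in bytes, and the path the file will occupy under data/ once retrieved, while the checksum manifests pin the expected content of every payload file, fetched or not.

  my_dataset/                       the bag directory
    bagit.txt                       BagIt version declaration
    bag-info.txt                    bag-level metadata
    manifest-sha256.txt             checksums for every payload file, whether local or remote
    fetch.txt                       remote members, one per line: URL, length, payload path
    metadata/manifest.json          Research Object description of the dataset contents
    data/
      clinical/phenotypes.csv       small files can be carried directly in the payload

  For the imaging genetics example above, fetch.txt might contain lines such as:

    https://imaging.example.org/scans/subj001.nii.gz   2147483648   data/images/subj001.nii.gz
    https://genomics.example.org/wgs/subj001.bam        32212254720  data/genomes/subj001.bam

A consumer of the bag can thus determine exactly which files constitute the dataset, verify them against the manifests, and retrieve the remote members on demand, without the dataset ever having been marshalled at a single site.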