This project has a number of goals, including improving support for large datasets in the bioinformatics community, furthering the development of Semantic Web technologies, and supporting data sharing and reproducible science. Today I’m going to go into a little more detail about the first of these and talk about the work I’ve done so far on it.
To get up to speed on all of the interrelated Semantic Web standards and technologies, I have been working on a tool for converting objects from the R statistical computing language into RDF triples, the native format of the Semantic Web. Although this will be a valuable tool on its own, it is also being developed to support the next version of R/qtl, a library for quantitative trait locus (QTL) mapping in R developed by Karl Broman and Hao Wu, along with a host of other contributors. The next incarnation of R/qtl will focus on support for highly parallelized computation, a key component of which will be storing results in a database that can be queried and manipulated remotely, as opposed to keeping huge data sets in memory on one computer.
Data Sharing and Reproducibility
The other advantage of storing statistical data in a triple-based format is that it can be easily, even automatically, published for others to download, inspect, and interact with. In RDF, every property of a resource is defined by its relation to other resources or objects, and each relation comes with an attached definition that either a machine or a human can access for further details. This allows for a huge amount of flexibility in data types and storage schemas, as well as the application of algorithmic reasoning techniques to simplify a data set or explore its implications.
Publishing scientific data in a machine-readable format also makes it dramatically more useful to scientists hoping to replicate the results or build upon them. Most publications will, at best, include supporting data as a table or tables of statistical aggregates, and even when lower-level raw data are available, they are usually stored in a flat format such as CSV, which includes little to no semantic content such as the units or attributes of objects, or their meaning in the context of the rest of the data. While some fields have begun using relational database technology more extensively, the fact that the most popular data storage formats for many researchers are essentially text files speaks to the rigidity and extra complications of using dedicated database systems.
RDF has a somewhat mind-bending structure of its own to understand, and it’s certainly no silver bullet for the problem of generalized data storage, but its flexibility allows it to overcome much of the ossification and user-unfriendliness of trying to use relational databases for storing and publishing scientific results. A primary goal of my work this summer will be to extend existing Ruby tools and build new ones to help make semantic data storage accessible and simple for researchers, supporting the crucial work of examining, validating, and extending published results.
The Semantic Web is built on three core technologies: the RDF data model, the SPARQL query language, and the OWL Web Ontology Language. There is a large body of documentation about all three of these tools, which can be found around the web or in the W3C’s standards documentation. I may write some posts going into more detail, but for now, a brief overview:
RDF essentially comes down to describing data using the ‘Subject Predicate Object’ format, creating statements known as ‘Triples’. An example would be “John(subject) knows(predicate) Mary(object)”. With a few exceptions, each component of a triple is given a URI, which looks a lot like a regular URL and can be used to uniquely identify a resource (such as ‘John’ or ‘Mary’) or a relation (such as ‘knows’), as well as providing a link where more information about the resource or relation may be found. You can think of subjects and objects as nodes, and predicates as lines connecting them. Although this doesn’t mean much for one statement such as “John knows Mary”, a collection of similar statements defines a big directed graph of interconnected nodes, where every connection is labeled with its meaning.
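To make that concrete, here is the “John knows Mary” example written in Turtle, a common RDF serialization. The `foaf:` terms come from the real FOAF vocabulary; the `example.org` URIs are made up for illustration:

```turtle
@prefix ex:   <http://example.org/people/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

ex:john foaf:knows ex:mary .    # "John knows Mary"
ex:john foaf:name  "John" .
ex:mary foaf:name  "Mary" .
ex:mary foaf:knows ex:alice .   # each new statement extends the graph
```

Each line is one triple; together they form a small labeled graph with `ex:john`, `ex:mary`, and `ex:alice` as nodes.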
SPARQL is the official query language of the Semantic Web. It can be used to select elements of an RDF graph based on their subject, predicate, or object elements. Its syntax resembles, superficially at least, the SQL language familiar to many database users, but in practice it functions quite differently. Even so, many advanced SQL-style operations, such as aggregation and subqueries, are supported as well.
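As a quick sketch, a SPARQL query against the graph above (again using made-up `example.org` URIs and the FOAF vocabulary) to find the names of everyone John knows might look like:

```sparql
PREFIX ex:   <http://example.org/people/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Match every triple pattern "john knows ?friend" and look up each friend's name
SELECT ?name
WHERE {
  ex:john  foaf:knows ?friend .
  ?friend  foaf:name  ?name .
}
```

The `WHERE` clause is just a set of triple patterns with variables; any combination of nodes in the graph that satisfies all of them becomes a result row.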
OWL is a language for describing ontologies, which are used to formally represent the concepts behind the data in an RDF store, making machine interpretation simpler. The technology is crucial to the interconnectedness of the Semantic Web, as it is the means by which the relations between disparate resources can be automatically discovered, allowing relatively easy integration of new or existing data.
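As a toy example (the class and property names here are invented, not drawn from any real ontology), a few OWL statements in Turtle might declare:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/bio/> .

ex:GenomicRegion a owl:Class .
ex:QTL           a owl:Class ;
                 rdfs:subClassOf ex:GenomicRegion .   # every QTL is a genomic region

ex:linkedTo      a owl:ObjectProperty ,
                   owl:SymmetricProperty .            # if A linkedTo B, B linkedTo A
```

Given these definitions, a reasoner can infer facts that were never stated explicitly, such as classifying every `ex:QTL` as an `ex:GenomicRegion`, which is what makes automatic integration of separately published data sets possible.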
There is, of course, much more to each of these technologies, including the technicalities, use cases, and extensions of each, than would fit in one section of this blog post, but hopefully this gives a broad overview of how the project will work.
A Starting Place
To begin with, I have developed a Ruby-based tool to automatically convert data frame objects in R to RDF, using the Data Cube vocabulary, which was developed as a generalizable way of representing multidimensional data. It can be run easily on any Ruby-capable machine, but since the script and all of its dependencies are pure Ruby libraries, I was also able to deploy it as an executable JAR with Warbler, so anyone with Java installed can use it without having to download any dependencies. I’ll go into more detail about how it works, why it’s useful, and where this part of the project is headed in a separate blog post.
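To give a flavor of the underlying idea, here is a rough sketch in plain Ruby. This is not the actual tool: the method name, base URI, and column handling are all invented for illustration, and it skips the dataset/structure metadata that a real Data Cube export needs. It simply turns a column-oriented table (a hash of arrays, standing in for an R data frame) into one `qb:Observation` per row, emitted as N-Triples strings:

```ruby
# Hypothetical sketch: convert a column-oriented "data frame" into
# Data Cube-style observation triples, serialized as N-Triples lines.
QB   = "http://purl.org/linked-data/cube#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
BASE = "http://example.org/dataset/"   # made-up base URI for the example

def frame_to_ntriples(frame)
  rows = frame.values.first.length
  (0...rows).flat_map do |i|
    obs = "<#{BASE}obs#{i}>"
    # Every row becomes a qb:Observation node...
    triples = ["#{obs} <#{RDF_TYPE}> <#{QB}Observation> ."]
    # ...with one triple per column, using the column name as the predicate.
    frame.each do |column, values|
      triples << "#{obs} <#{BASE}#{column}> \"#{values[i]}\" ."
    end
    triples
  end
end

frame = { "chromosome" => [1, 1, 2], "lod" => [3.2, 1.7, 4.5] }
puts frame_to_ntriples(frame)
```

Because each row becomes a self-describing set of triples, the output can be loaded into any RDF store and queried with SPARQL, with no schema negotiation needed.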