Rise of the Machines: Using SADI to Augment Your Data

As discussed in my last post, converting a flat dataset to an RDF graph can give you access to a variety of analysis and exposure tools, or form the base of new software that relies on your data format. It is a flexible format, in that new information can be added without concern for table joins or schema changes, and yet it has incredible descriptive power, because it is possible in principle to know that two statements in separate data stores are equivalent information, and because individual data elements can often simply be loaded in a web browser to find out more about them.

These properties certainly have a lot to offer scientists and other human consumers of data, but in many ways they are a particularly good fit for algorithmic data users, which can process a stream of information very quickly but are not as adept at resolving ambiguity or at reliably finding more information about a concept without guidance on where to look.

SADI: Semantic Automated Discovery and Integration

One particular project that illustrates this is SADI, the brainchild of bioinformatician and semantic web expert Dr. Mark Wilkinson. Mark, who also happens to be one of my GSOC mentors, has built a framework to support automated discovery of, and access to, distributed datasets and services; it is a practical example of a service built using the concepts I’ve been working to learn and apply this summer.

SADI is not so much a tool or piece of software as a set of standards for service interoperability, grounded in and supported by the existing standards of the internet and the semantic web. This means that, although most of the existing SADI services are focused on bioinformatics data, the system is flexible and general purpose enough to apply to essentially any service or web interface.

SADI comprises six key conventions, listed on the How SADI works page:

  1. SADI Services consume and provide data via simple HTTP POST and GET.
  2. SADI Services consume and produce data in RDF format. This allows SADI Services to exploit existing OWL reasoners and SPARQL query engines to enhance interoperability between Services and the interpretation of the data being passed between them.
  3. Service interfaces (i.e., Inputs and Outputs) are defined in terms of OWL-DL classes; the property restrictions on these OWL classes define what specific data elements are required by the Service and what data will be provided by the Service, respectively.
  4. Input RDF data – data that is compliant with the Input OWL Class – is “decorated” or “annotated” by the service provider to include new properties. These properties will (of course) be a function of the lookup/analytical operations performed by the Web Service.
  5. Importantly, discovery of SADI Services can include searches for the properties the user wants to add to their data. This contrasts with other Semantic Web Service standards which attempt only to define the computational process by which input data is analysed, rather than the properties that process generates between the input and output data. This is KEY to the semantic behaviours of SADI.
  6. SADI Web Services are stateless and atomic.

Essentially, a SADI service uses the OWL ontology language to describe the information it expects as input, and what it will return. It uses common internet conventions for its communication protocol, based on recognized W3C standards.

These conventions allow all of the same web tools and agents that access the world wide web to interact with SADI services in a similar manner. Because of this, and because information in the system is represented using semantic web standards, a database of which services provide what sorts of information, and for which inputs, has been constructed; it can take an entry straight from a triplified dataset and discover more information about it, all without any user intervention.

The basic process of using a SADI service involves retrieving the service description by issuing a GET request to it, then POSTing input data that conforms to the OWL class it expects. The service then responds with those same individuals, annotated with the new information it provides.

As an example, here are the request headers and a couple of input objects for the example service, which sends back a greeting for each “named individual” it receives (how nice!):
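(A sketch of what that exchange might look like; I'm assuming the public hello-world service at sadiframework.org/examples/hello, using foaf:name for the input property and made-up URIs for the two individuals.)

    POST /examples/hello HTTP/1.1
    Host: sadiframework.org
    Content-Type: text/turtle
    Accept: text/turtle

    @prefix foaf: <http://xmlns.com/foaf/0.1/> .

    # Two input individuals; each just needs a name attached,
    # as required by the service's input OWL class.
    <http://example.org/people/1> foaf:name "Guybrush Threepwood" .
    <http://example.org/people/2> foaf:name "Herman Toothrot" .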

and the response:
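(Again a sketch; I'm assuming a greeting property defined in the service's output ontology.)

    @prefix hello: <http://sadiframework.org/examples/hello.owl#> .

    # The same individuals come back, annotated with the new property.
    <http://example.org/people/1> hello:greeting "Hello, Guybrush Threepwood!" .
    <http://example.org/people/2> hello:greeting "Hello, Herman Toothrot!" .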

All of this behavior is defined by the OWL classes in the service description, so any user or algorithm accessing the service can learn how to interact with it simply by reading the description, which is both human and machine friendly.

Asynchronous Responses

Beyond the clarity and simplicity of the RDF and the reuse of familiar web standards, much of SADI’s strength lies in its ability to robustly process large requests and inputs. Services can be set up as synchronous, where a response isn’t returned until the service has finished processing its input, or asynchronous, where the server instead returns an address that a client can check, or “poll”, to see if the operation has finished.

The response to a poll request also includes a header specifying how long the client should wait before trying again, making the whole process of retrieving an asynchronous result simple and transparent to coordinate. As we’ll see later on, this makes batch processing much simpler, allowing large volumes of information to be exchanged without having to worry about dealing with timeouts and network issues, or trying to efficiently coordinate many different remote requests.

The SADI framework has other benefits and features that we haven’t specifically used yet, such as the security afforded by its enforcement of an object model (as opposed to raw SPARQL queries), and the ability to distribute queries over multiple resources. In addition to sadiframework.org, further details can be found in The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation, a paper published in the Journal of Biomedical Semantics by Mark, Benjamin Vandervalk and Luke McCarthy, and available in full at the link.

SADI in action

To give a concrete example of how to use SADI, I’ll go over the script I wrote which uses it to assist in our analysis of the MAF dataset we’ve been working with. When trying to make inferences based on the frequency with which mutations appear in a gene, it is necessary to adjust for the size of that gene. The location of a gene can actually be a bit fuzzy, since what exactly constitutes a gene is itself less clear-cut than you might expect, but databases exist that contain the generally accepted start and end positions of a gene, from which its length can be found.

The old way

To get started, I searched for databases that contained gene location information and allowed it to be accessed programmatically. Of these, the Ensembl genome database was the easiest for me to use, as it has a new RESTful endpoint, and I’m usually happiest working with REST services.

The first step in constructing a query to it was to find the canonical name for a HUGO symbol from the dataset. Unfortunately, in addition to the occasional error or nonsense entry, the gene information in the MAF dataset often used synonyms for the “official” gene name, which are recognized by the HGNC but not immediately convertible to their equivalent Ensembl ID. To deal with this I used the HGNC dataset provided by Bio2RDF to look up first the official symbol, and then the symbol’s ID in the Ensembl database.

In the end, I came up with a couple of methods to retrieve the information:
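(A rough sketch of the approach rather than the original gist; the Bio2RDF endpoint and predicate names are illustrative, and the gene lookup uses the symbol-lookup endpoint of the Ensembl REST API.)

    require 'rest-client'
    require 'json'

    # Illustrative Bio2RDF HGNC SPARQL endpoint
    HGNC_SPARQL = 'http://hgnc.bio2rdf.org/sparql'

    # Resolve a possibly-synonymous symbol from the MAF file to the
    # HGNC-approved symbol.
    def official_symbol(raw_symbol)
      query = <<-SPARQL
        PREFIX hgnc: <http://bio2rdf.org/hgnc_vocabulary:>
        SELECT ?approved WHERE {
          ?gene hgnc:approved_symbol ?approved .
          { ?gene hgnc:approved_symbol "#{raw_symbol}" }
          UNION
          { ?gene hgnc:synonym "#{raw_symbol}" }
        } LIMIT 1
      SPARQL
      response = RestClient.get(HGNC_SPARQL,
                                params: { query: query },
                                accept: 'application/sparql-results+json')
      row = JSON.parse(response)['results']['bindings'].first
      row && row['approved']['value']
    end

    # Look up the gene's coordinates through the Ensembl REST API and
    # return its length in base pairs.
    def gene_length(symbol)
      gene = JSON.parse(RestClient.get(
        "https://rest.ensembl.org/lookup/symbol/homo_sapiens/#{symbol}",
        accept: :json))
      gene['end'] - gene['start'] + 1
    end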

This approach required multiple requests per gene and was both error-prone and slow, since each lookup involved several remote queries and had to be completed one at a time. Even worse, the results weren’t stored anywhere, so a new request had to be made each time the information was required.

SADI to the Rescue

Mark, however, was kind enough to set up a SADI service to handle the process. This is great not just because it runs a lot more smoothly and gave me the opportunity to do some work with SADI, but because it also makes saving and integrating the responses almost trivial.

To begin with, I created a class with a simple method to run a synchronous SADI request and return the results as an RDF graph:
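(A minimal sketch of that class; the real script may differ in detail, but this is the gist of a synchronous SADI call using the rest-client, rdf, and rdf-turtle gems.)

    require 'rest-client'
    require 'rdf'
    require 'rdf/turtle'

    class SADIClient
      # service: URL of the SADI service
      # input:   Turtle string describing the input node(s)
      def self.fetch(service, input)
        response = RestClient.post(service, input,
                                   content_type: 'text/turtle',
                                   accept:       'text/turtle')
        # Parse the annotated output into an in-memory RDF graph
        graph = RDF::Graph.new
        RDF::Turtle::Reader.new(response.body) { |reader| graph << reader }
        graph
      end
    end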

It takes a service URL and RDF input, then uses the rest-client gem to handle the request. SADI supports both Turtle and RDF/XML input, but I’m partial to Turtle, so the script uses it for input. An example, for the gene “ACF”, would be:
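(The service URL here is a placeholder for the one Mark set up, and the predicates are the readable forms mentioned below.)

    service = 'http://example.org/sadi/gene-length'   # placeholder URL
    input   = <<-TTL
      @prefix sio: <http://semanticscience.org/resource/> .

      <http://identifiers.org/hgnc.symbol/ACF>
          sio:has_attribute [ sio:has_value "ACF" ] .
    TTL

    graph = SADIClient.fetch(service, input)
    puts graph.dump(:ttl)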

producing the output:
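(Again with readable predicate names; the attribute class and the length value here are placeholders for what the service actually returns.)

    @prefix sio: <http://semanticscience.org/resource/> .

    <http://identifiers.org/hgnc.symbol/ACF>
        sio:has_attribute [
            a <http://example.org/ontology#GeneLength> ;   # placeholder class
            sio:has_value 12345                            # placeholder length
        ] .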

Some of the supporting ontologies’ predicates have been replaced by more readable forms, but in general this is fully valid RDF on both ends, so loading it to or from a triple store is no trouble at all.

Caution: Semantic Hazard

It’s important to note that some work still needs to be done on the data model for the triplified MAF dataset before it will play nicely with other scientific datasets such as those exposed by SADI. Mark was willing to set up a service that could accommodate the dataset I’d constructed (SADI is quite flexible, after all), but this shouldn’t be taken as a representative example of how a service should look or be used. Although my gem has gotten to the point where it avoids gross incompatibilities, such as stepping on others’ namespaces or failing to reuse common vocabularies, there are some subtler semantic issues that prevent simple integration with SADI’s more interesting functions.

First of all, SADI makes frequent use of the SIO ontology, which provides a rich and unified system for describing data using RDF, at the cost of certain restrictions on how that data is represented. You can see the general outline of how SIO works in the output above: attributes of objects are attached with the “has_attribute” predicate, and literal values for attributes with “has_value”. I spent some time trying to use this pattern in the MAF parser, but given the amount of time we had left we decided to just use the simple representation I described in earlier blog posts. I believe full use of SIO would be both possible and worthwhile though, since it allows for much greater interoperability without sacrificing flexibility, so this will be something I continue working on past the end of GSOC.

Second, there are also some “philosophical” issues getting in the way of full SADI integration. I’m currently using URIs from identifiers.org to provide dereferenceable identifiers for the HUGO symbols in the MAF file. This is a great application of linked data principles, since it automatically attaches both more information about a particular gene and information about the service and scheme used to represent it. However, a statement like “http://identifiers.org/hgnc.symbol/RBFOX1 has_gene_length 1694246” doesn’t really make sense; the identifiers.org URL for RBFOX1 doesn’t have a gene length, because it’s just an identifier! As I understand it, the right structure would be more like “X is_a gene, X has_identifier identifiers.org/X, X has_gene_length Y”, although I could still be wrong about this; getting the semantics right is one of the trickiest parts of working with these systems.
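In Turtle, the difference between the two modellings might look something like this (the :Gene class and the predicates are illustrative):

    @prefix : <http://example.org/ontology#> .

    # What the data currently says: the identifier itself has a length.
    <http://identifiers.org/hgnc.symbol/RBFOX1> :has_gene_length 1694246 .

    # Closer to the intended meaning: a distinct gene resource that has
    # the identifier and has the length.
    :RBFOX1 a :Gene ;
        :has_identifier <http://identifiers.org/hgnc.symbol/RBFOX1> ;
        :has_gene_length 1694246 .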

If you’re like me, you love the idea of a database technology where the ontological characteristics of the entities stored in it are as important as the raw data itself. But if this seems like splitting hairs, you’re missing the scope of the vision the Semantic Web is working towards, and which SADI demonstrates: once we make a statement about something, that statement should be unambiguously defined and verifiable by a person, or an algorithm, with knowledge of its particular domain; and by making other statements using its component parts, we can build a vast web of interlinked knowledge, perhaps one day supplanting the web of linked documents we all use today.

RDF’s flexibility supports this vision, but it also gets in the way in that it allows you to make nonsensical statements such as the ones above. Formally defined ontologies like SIO provide the more precise structure that allows you to make a statement with reasonable confidence that it will be both meaningful and easily reusable by others. In my own time after this summer I’m looking forward to working on and writing more about this topic, as I think it really gets at the potential for using semantic technologies in science, programming, and machine intelligence research.

Speeding things up with Async

A single request runs fairly quickly and retrieves the information we need, but in this simple form it’s not quite sufficient for larger volumes of information. The nature of RDF makes batch input very easy to set up; you just add more objects to the Turtle input. However, the BRCA dataset has 1,760 distinct genes, so even trying to load a small subset of them through the synchronous service takes long enough to cause the request to time out.

This is precisely what the asynchronous mode is meant for, so after getting the basic synchronous query up and running I moved on to that. Asynchronous queries have some added complexity, so the class got quite a bit longer, but it’s still a fairly simple process for all the work that’s going on behind the scenes:
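(A sketch of the asynchronous version; the method name fetch_async matches the description below, but the way polling URLs are extracted, via rdfs:isDefinedBy links as in the SADI convention, and the status handling are simplifications.)

    require 'rest-client'
    require 'rdf'
    require 'rdf/turtle'

    class SADIAsyncClient
      def self.fetch_async(service, input)
        response = RestClient.post(service, input,
                                   content_type: 'text/turtle',
                                   accept:       'text/turtle')

        # The initial response links each input node to a polling URL.
        graph = parse_turtle(response.body)
        poll_urls = graph.query([nil, RDF::RDFS.isDefinedBy, nil])
                         .map { |statement| statement.object.to_s }.uniq

        # Work through the polling URLs, honouring the Retry-After header,
        # and concatenate the Turtle from each finished result.
        poll_urls.map { |url| poll(url) }.join("\n")
      end

      def self.poll(url)
        loop do
          # Use the block form so "not ready yet" redirects aren't followed
          # automatically and we can inspect the response ourselves.
          response = RestClient.get(url, accept: 'text/turtle') { |resp, _req, _res| resp }
          return response.body if response.code == 200
          sleep((response.headers[:retry_after] || 10).to_i)
        end
      end

      def self.parse_turtle(ttl)
        graph = RDF::Graph.new
        RDF::Turtle::Reader.new(ttl) { |reader| graph << reader }
        graph
      end
    end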

When the fetch_async method POSTs data to the service, it receives a URL to poll for each of the inputs. The method collects these polling URLs, then handles the process of working through them and waiting until a response is available. This means there is no need to keep a connection open the whole time, and the client can simply follow the service’s instructions on how long to wait before checking back in. If the responses are split over multiple polling URLs, it waits until each has finished processing, then returns the output, again in Turtle form.

Straight to the Database

At first, after getting this working, I immediately set about parsing just the gene lengths out of the output so I could return them as Ruby objects. This habit comes from my previous experience using various APIs, where the general process involves parsing your data into a special format, making the request, and then grabbing the information you want from the response. SADI eliminates the last of these steps, and with the right input structures the first as well: the response is already in an RDF format, so you can simply load it straight into a triple store, automatically augmenting the information you already have and providing an offline database of gene lengths for later lookup.

I’ve written a script to do just this, currently configured to work with 4store specifically. It retrieves the HUGO genes currently in the database, sets up the SADI input with them, and loads the output directly into the triple store. The requests are split into batches of 250, so the set can be processed much faster than doing the lookups one at a time, and this way it’s a one-time process instead of something that gets repeated every time the length of a particular gene is needed.
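(A trimmed-down sketch of that script; the endpoint URLs, graph URI, and service URL are placeholders for my local setup, and the /sparql/ and /data/ paths are the ones 4store's HTTP server exposes.)

    require 'rest-client'
    require 'json'

    FOURSTORE_SPARQL = 'http://localhost:8080/sparql/'
    FOURSTORE_DATA   = 'http://localhost:8080/data/'
    GRAPH_URI        = 'http://example.org/graphs/gene-lengths'   # placeholder
    SADI_SERVICE     = 'http://example.org/sadi/gene-length'      # placeholder

    # 1. Pull the distinct HGNC symbol URIs already in the store.
    query = <<-SPARQL
      SELECT DISTINCT ?gene WHERE {
        ?gene ?p ?o .
        FILTER(REGEX(STR(?gene), "^http://identifiers.org/hgnc.symbol/"))
      }
    SPARQL
    results = RestClient.post(FOURSTORE_SPARQL, { query: query },
                              accept: 'application/sparql-results+json')
    genes = JSON.parse(results)['results']['bindings'].map { |b| b['gene']['value'] }

    # 2. Call the SADI service in batches of 250, and
    # 3. load each response straight back into the store.
    genes.each_slice(250) do |batch|
      input = "@prefix sio: <http://semanticscience.org/resource/> .\n" +
              batch.map { |uri|
                "<#{uri}> sio:has_attribute [ sio:has_value \"#{uri.split('/').last}\" ] ."
              }.join("\n")

      output = SADIAsyncClient.fetch_async(SADI_SERVICE, input)

      RestClient.post(FOURSTORE_DATA,
                      'data'      => output,
                      'graph'     => GRAPH_URI,
                      'mime-type' => 'application/x-turtle')
    end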

When you step back and think about it, this ability to make a request using some entries from your database and load the response straight back in, without parsing or conversion, is pretty remarkable, and it doesn’t even begin to address SADI’s support for discovering entirely new information. While this post should serve as a small example of what it can do, there is a huge list of available services on the SADI site. And if you’re looking for a simple Ruby client for accessing a service, try out the code in the gists above, or clone the sinatra-based web interface I built.
