PubliSci as a service

The provenance DSL and DataSet generation I’ve been working on have most of their basic functionality in place, but I also planned on creating a web-based API for accessing the gem and building services on top of it. I’ve created a demo site as a prototype for this feature using Ruby on Rails, and I’m happy enough with it that I’d like to make the address public so people can poke around and give me feedback. Although I’ll eventually separate some of this functionality into a lighter-weight server, Rails has helped immensely in developing it, both because it naturally encourages good RESTful design and because the Ruby community has created many useful gems and tools for rapid prototyping of websites with the framework. The demo site is up at http://50.116.40.22:3000, and you can take a look at the source on Github.

[Image: REST in a nutshell. Think of putting the nouns in the URL and the verbs in the request type (source)]

The server acts as an interface to most of the basic functions of the gem: the DSL, the dataset RDFization classes, and the Triplestore convenience methods. Furthermore, this functionality is accessible either through an HTML interface (with a pleasant Bootstrap theme!) or programmatically as a (mostly) RESTful web service, using JavaScript and JSON.
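For example, here’s roughly what grabbing a resource as JSON might look like from plain Ruby. The /entities.json route and the "subject" key are my guesses at the Rails defaults for this app rather than a documented API, so treat this as a sketch:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Ask the demo server for the list of entities in JSON form.
# The /entities.json route is an assumption based on Rails' default
# resource routing; the actual route may differ.
uri = URI("http://50.116.40.22:3000/entities.json")
response = Net::HTTP.get_response(uri)

entities = JSON.parse(response.body)
entities.each { |entity| puts entity["subject"] } # the "subject" key is also an assumption
```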

I’m planning to write a tutorial on how to create a publication with it, but for this post I’ll just give a broad overview of how you can use the service. The example data on the site is currently based on the PROV primer, with a couple of other elements added to test different features, so it may seem a bit contrived, but it should give you some idea of how you could use the site’s various features.

The root page of the site shows the DSL script that was used to initialize the site, with syntax highlighting thanks to the lovely CodeRay gem.
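Under the hood, highlighting like this is only a couple of lines; a minimal sketch of what the helper might look like (the method name is made up, but the CodeRay calls are the gem’s standard API):

```ruby
require 'coderay'

# Hypothetical helper: turn the DSL source into highlighted HTML.
# The :ruby scanner works because the DSL is itself plain Ruby.
def highlighted_dsl(source)
  CodeRay.scan(source, :ruby).div(line_numbers: :table)
end
```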

[Screenshot: the DSL script view]

You can also edit the DSL script, which will regenerate the underlying data and set up a new repository object for you. As a warning up front: the DSL is currently based on instance_eval, which introduces a big security risk if not handled properly. I’m working on automatically sandboxing the evaluation in a future version, but for now, if you’re worried about security, you can easily change a line in the initializer to disable remote users from updating the DSL.
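To make the risk concrete, the evaluation boils down to something like the sketch below (heavily simplified): whatever script a user submits is run as ordinary Ruby in the DSL object’s context, so it can do anything Ruby can do.

```ruby
# Heavily simplified sketch of why instance_eval on user input is risky.
class ProvenanceDSL
  def evaluate(script)
    instance_eval(script) # runs the submitted text as Ruby, with no restrictions
  end
end

user_input = 'puts "harmless"' # could just as easily be system("rm -rf ~")
ProvenanceDSL.new.evaluate(user_input)
```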

Along the top, you’ll see links for Entities, Activities, and Agents, which are elements of the PROV ontology, as well as Datasets, which represent any Data Cube formatted data stored in the repository. Each of these elements acts as a RESTful resource, which can be created/read/updated/deleted in much the same way as a standard ActiveRecord model. Let’s take a look at the Entities page to see how this works.
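In Rails terms, each of these is simply declared as a resource, so the usual verb-to-action mapping applies. A sketch of the relevant part of the routes file (the application class name here is a placeholder):

```ruby
# config/routes.rb (sketch): each PROV element and Datasets are plain resources,
# so GET /entities lists them, POST /entities creates one, GET /entities/:id shows
# one, PATCH/PUT updates it, and DELETE removes it, just like an ActiveRecord model.
Publisci::Application.routes.draw do
  resources :entities
  resources :activities
  resources :agents
  resources :datasets
end
```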

[Screenshot: the Entities index page]

On the Entities page, you can see a table where each row represents an entity. PROV-relevant properties and relationships are also displayed and hyperlinked, allowing you to browse through the information using the familiar web idiom of linked pages. All of this is done with SPARQL queries behind the scenes, but the user doesn’t (immediately) need to know about them to use the service.
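Behind the scenes, the listing comes down to a query along these lines. This is my approximation rather than the exact query the server runs, shown here with the sparql gem from the ruby-rdf project and a local copy of the data standing in for the site’s repository:

```ruby
require 'rdf'
require 'sparql'

# List every subject typed as a prov:Entity, roughly what the Entities page
# needs in order to build its table. The filename is just an example.
repository = RDF::Repository.load("provenance.ttl")

query = <<-SPARQL
  PREFIX prov: <http://www.w3.org/ns/prov#>
  SELECT ?entity WHERE { ?entity a prov:Entity }
SPARQL

SPARQL.execute(query, repository).each do |solution|
  puts solution[:entity]
end
```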

Clicking the “subject” field for each Entity will take you to a page with more details, as well as a link to the corresponding DataSet if it exists.

[Screenshot: an individual Entity’s page]

From that page, you can export the DataSet using the writers in the bio-publisci gem. At the moment, the demo site can export CSV or Weka ARFF data, but I’ve been working recently on streamlining the writer classes, and I’ll be adding writers for R and some of the common SciRuby tools before the end of the summer.

You can also edit Entities, Agents, and Activities, or create new ones, in case you want to correct a mistake or add information to the graph. This is a tiny bit wonky in some spots, both because of the impedance mismatch between RDF and the standard tabular backend Rails applications generally use, and because I’m by no means a Rails expert, but you can edit most of the fields, and the creation of new resources generally works fine.

[Screenshot: editing an Entity]

I think being able to browse the provenance graph of a dataset or piece of research through an intuitive browser-based interface forms a useful bridge between the constraints and simplicity of a standard CRUD-ish website and the powerful but daunting complexity of RDF and SPARQL. If you find parts of this model too constraining, you can also query the repository directly using SPARQL:

[Screenshot: the SPARQL query page]
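For instance, a query that walks from an entity to the activity that generated it and on to the responsible agent might look like the following. The PROV terms are standard, but the exact shape of the data on the demo site may of course differ:

```sparql
PREFIX prov: <http://www.w3.org/ns/prov#>

# For every generated entity, find the generating activity and the agent
# that activity was associated with.
SELECT ?entity ?activity ?agent WHERE {
  ?entity   prov:wasGeneratedBy    ?activity .
  ?activity prov:wasAssociatedWith ?agent .
}
```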

Two things to note here. First, this provides a single interface for running queries on any of the repositories supported by the ruby-rdf project, even some without a built-in SPARQL endpoint. Second, since I’ve set up the Rails server with a permissive CORS policy, you can run queries or access resources across domains, allowing you to easily integrate it into an AJAX application or just about anywhere else. For an example, have a look at this jsfiddle, which creates a bar chart in d3 from one of the datasets on the demo site.
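For reference, a permissive CORS setup in a Rails app can be as small as the snippet below. I’m showing the rack-cors gem here as an illustration; it may not be exactly how the demo server sets its headers:

```ruby
# Inside config/application.rb's Application class (sketch): allow any origin
# to read resources and run queries from other domains via the rack-cors gem.
config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    resource '*', headers: :any, methods: [:get, :post, :options]
  end
end
```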

A few other features that may come in handy have been implemented, which I’ll save for a later post. One that may be useful to some people is the ability to dump the entire contents of the repository in Turtle RDF form. If you want to make a complete copy of the site’s repository, or save changes you’ve made in a serialized format for later, it’s as easy as calling the repository/dump route. The dataset DSL will also automatically download and handle remote files specified by their URLs using the ‘object’ keyword, which makes loading external datasets extremely simple.
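Grabbing that dump from a script is a one-liner; the route name comes straight from above, though the output filename is just an example:

```ruby
require 'open-uri'

# Save a full Turtle serialization of the site's repository for later.
turtle = URI.parse("http://50.116.40.22:3000/repository/dump").read
File.write("publisci_backup.ttl", turtle)
```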

There’s a fair bit more to do to make this a fully featured web service: some elements of the PROV vocabulary are not fully represented, I’d really like to separate out the fundamental parts into a more lightweight and deployable Sinatra server, and raw (non-RDF) datasets need better handling. Additionally, while you can easily switch between an in-memory repository for experimentation and more dedicated software such as 4store for real work, it’d be nice to make the two work together, so you could have an in-memory workspace and then save your changes to the more permanent store when you were ready. Aside from the performance gain from not having to wait for queries on a large repository, this would help with deleting resources or clearing your workspace, since the methods for deleting data from triple stores are somewhat inconsistent and underdeveloped across different software.

Some of the cooler but less important features will have to wait until after GSoC, but if this is a tool you might use and there’s a particular feature you think would be important to have in the basic version by the end of the summer, please get in touch!
