Sunday 22 January 2012

Get Real Data from the Semantic Web - Finding Resources

In my last article, I briefly explained how to get data from a resource using Python and SPARQL. This article explains how to find the resource in the first place.
Have you ever been taught how to knit? If you have, then you'll know that you are not usually taught how to cast on (that is, start off) in your first lesson. That's because it's much easier to learn how to knit than it is to cast on.

So it is with the Semantic Web. Once you have a resource URL, it's reasonably easy to extract information linked to that resource, but finding the starting resource is a bit trickier.
So let's just recap how we might get the abstract description for London from DBpedia.

If we know the URL, then that's pretty straightforward:
(If you want to follow this tutorial, then you'd better copy the sparql.py file from my last article.)
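Using the DBpediaEndpoint class from that sparql.py, the recap looks something like this. It's a sketch: I'm assuming DBpedia's http://dbpedia.org/ontology/abstract property and filtering for the English-language abstract.

    from sparql import DBpediaEndpoint

    endpoint = DBpediaEndpoint()

    # We already know the resource URL, so ask for its abstract directly.
    rows = endpoint.query("""
        SELECT ?abstract WHERE {
            <http://dbpedia.org/resource/London>
                <http://dbpedia.org/ontology/abstract> ?abstract .
            FILTER (lang(?abstract) = "en")
        }
    """)

    for row in rows:
        print(row["abstract"]["value"])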


[Image: RDF types for the DBpedia entry for London]
If you don't know the URL, however, then you'll have to search for it. According to the DBpedia entry, London is many things, including an owl:Thing. There are a lot of Things out there, probably enough to make even the DBpedia endpoint time out, so let's restrict the search to a more specific type such as yago:Locations. Don't go too far the other way, though: something like yago:BritishCapitals is too restrictive to be much use for a general search.
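The search itself might look something like this. Again a sketch: the class name comes straight from the types listed above, and I'm spelling out the prefixes to be on the safe side.

    rows = endpoint.query("""
        PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX yago: <http://dbpedia.org/class/yago/>

        SELECT ?resource WHERE {
            # Only consider resources typed as locations,
            # whose English label is exactly "London".
            ?resource rdf:type yago:Locations ;
                      rdfs:label "London"@en .
        }
    """)

    for row in rows:
        print(row["resource"]["value"])

If you get more than one result back, that's your cue to try a more specific class.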


Just to be a smart ass as I finish off, you can find the resource and fetch its abstract at the same time. But don't forget that doing this will stress the SPARQL endpoint more than is probably necessary. Be kind.
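Here's the combined query, with the same assumptions as before:

    rows = endpoint.query("""
        PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX yago: <http://dbpedia.org/class/yago/>

        SELECT ?resource ?abstract WHERE {
            # Find the resource and fetch its abstract in one round trip.
            ?resource rdf:type yago:Locations ;
                      rdfs:label "London"@en ;
                      <http://dbpedia.org/ontology/abstract> ?abstract .
            FILTER (lang(?abstract) = "en")
        }
    """)

    for row in rows:
        print(row["resource"]["value"])
        print(row["abstract"]["value"])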

Thursday 19 January 2012

Get Real Data from the Semantic Web

Semantic Web this, Semantic Web that: what actual use is the Semantic Web in the real world? I mean, how can you actually use it?

If you haven't heard the term "Semantic Web" over the last couple of years, then you must have been in... well, somewhere without this interweb they're all talking about.

Basically, by using metadata (see RDF), disparate bits of data floating around the web can be joined up. In other words, they stop being disparate. Better than that, you can theoretically query the connections between the data and get lots of lovely information back. This last bit is done via SPARQL, and yes, the QL does stand for Query Language.

I say theoretically because in reality it's a bit of a pain. I may be an intelligent agent capable of finding linked bits of data through the web, but how exactly would you do that in Python?

It is possible to use rdflib to find information, but it's very long-winded. It's much easier to use SPARQLWrapper, and in fact in the simple example below, I've used a wrapper around SPARQLWrapper to make asking for lots of similarly sourced data, in this case from DBpedia, even easier.
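Here's a minimal sketch of sparql.py. The class names are mine and the real file may differ in detail, but it needs nothing beyond the SPARQLWrapper package.

    # sparql.py - a thin convenience layer over SPARQLWrapper.
    from SPARQLWrapper import SPARQLWrapper, JSON

    class Endpoint(object):
        """Wraps a single SPARQL endpoint and always asks for JSON."""

        def __init__(self, url):
            self.wrapper = SPARQLWrapper(url)
            self.wrapper.setReturnFormat(JSON)

        def query(self, sparql):
            """Run a SELECT query and return the list of result bindings."""
            self.wrapper.setQuery(sparql)
            results = self.wrapper.query().convert()
            return results["results"]["bindings"]

    class DBpediaEndpoint(Endpoint):
        """An Endpoint that already knows where DBpedia lives."""

        def __init__(self):
            Endpoint.__init__(self, "http://dbpedia.org/sparql")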

To use this, try importing the DBpediaEndpoint class and feeding it some SPARQL:
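Something along these lines, assuming DBpedia's http://dbpedia.org/ontology/abstract property for the description:

    from sparql import DBpediaEndpoint

    endpoint = DBpediaEndpoint()
    resource_uri = "http://dbpedia.org/resource/London"

    rows = endpoint.query("""
        SELECT ?abstract WHERE {
            <%s> <http://dbpedia.org/ontology/abstract> ?abstract .
            FILTER (lang(?abstract) = "en")
        }
    """ % resource_uri)

    for row in rows:
        print(row["abstract"]["value"])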

Your homework: how do you identify the resource_uri in the first place?

That's for another evening.

Tuesday 17 January 2012

GitHub: Who needs it?

Do you ever think that you just don't want all your code on GitHub? I mean, it's only a quick hack, right?

Truth is, once you start using git you probably use it automatically for all your code, but you don't always want all of it floating around the net. What about those hard-coded email addresses and API tokens, or those references to your private net servers?

The answer is probably so simple that you have just overlooked it. You don't need to set up a local git server or hire one from Amazon. All you need to do is use Dropbox or Ubuntu One as your remote origin repository.

Here's how, using Ubuntu One on Ubuntu:

Write a short shell script something like this and save it on your path as repo.sh.
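Something like this will do; it's a sketch that assumes your Ubuntu One folder lives at ~/Ubuntu One (point REPO_BASE at your Dropbox folder instead if that's what you use).

    #!/bin/sh
    # repo.sh - create a bare repository inside the synced folder and
    # make it the origin of the git repository in the current directory.

    REPO_BASE="$HOME/Ubuntu One/repos"   # adjust to taste
    NAME="$1"

    if [ -z "$NAME" ]; then
        echo "usage: repo.sh <project-name>" >&2
        exit 1
    fi

    # Create the bare repository; Ubuntu One (or Dropbox) then quietly
    # syncs it off your machine for you.
    git init --bare "$REPO_BASE/$NAME.git"

    # Point the current project at it and push the master branch up.
    git remote add origin "$REPO_BASE/$NAME.git"
    git push origin master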

Now when you want to create a new repository all you have to do is:
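Something like this, where myhack is a made-up project name (remember to chmod +x the script first):

    cd ~/hacks/myhack
    git init
    git add .
    git commit -m "first commit"
    repo.sh myhack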

If you use Python and virtualenv you may be interested in the slightly extended script at http://pythonic-apis.blogspot.com/2012/01/using-ubuntu-one-as-git-repository.html.