I hadn’t really looked in on the blogs today, but I did take a peek at Twitter, and I got the impression from some of my trusted network of experts that @psychemedia had gone and done something that had gotten them rather excited…
So I headed over and started following the steps in Tony Hirst’s blog post, learning that someone can go from this list of statistics in Wikipedia:
As Tony sums it up: “we have scraped some data from a wikipedia page into a Google spreadsheet using the =importHTML formula, published a handful of rows from the table as CSV, consumed the CSV in a Yahoo pipe and created a geocoded KML feed from it, and then displayed it in a Yahoo map.” (It seems a lot less intimidating to the data illiterate when you read Tony’s full post.)
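For the curious, the first two steps of that pipeline — pulling a table out of an HTML page and re-publishing it as CSV — can be sketched in a few lines of Python. This is my own illustrative stand-in, not Tony’s method: the HTML table here is invented, and it plays the role that the =importHTML formula and the “publish as CSV” option play in the spreadsheet.

```python
import csv
import io
from html.parser import HTMLParser

# Invented stand-in for a Wikipedia statistics table; in the real
# workflow the spreadsheet's =importHTML formula fetches this.
HTML = """
<table>
  <tr><th>Country</th><th>Value</th></tr>
  <tr><td>France</td><td>42</td></tr>
  <tr><td>Spain</td><td>17</td></tr>
</table>
"""

class TableParser(HTMLParser):
    """Collects the rows of the first HTML table it sees."""

    def __init__(self):
        super().__init__()
        self.rows = []      # completed rows of cell text
        self.row = None     # row currently being built
        self.in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.row = []
        elif tag in ("td", "th"):
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self.row is not None:
            self.rows.append(self.row)
            self.row = None
        elif tag in ("td", "th"):
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.row.append(data.strip())

parser = TableParser()
parser.feed(HTML)

# Re-publish the scraped rows as CSV, as the spreadsheet step does.
buf = io.StringIO()
csv.writer(buf).writerows(parser.rows)
print(buf.getvalue())
```

From there, the CSV is exactly the kind of feed a Yahoo pipe (or any other mashup tool) can consume, geocode, and map.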
Or, to add my own feeble learning here on the sidelines:
* The tools for scraping, manipulating and re-presenting data keep getting easier to use (hell, even I more or less understand the steps Tony lays out here), and you really can do a great deal with free, web-based tools without writing code.
* More and more, I’m starting to think that, in addition to being a model for massive-scale knowledge building and an indispensable reference source, one of Wikipedia’s key contributions to the web will prove to be providing raw material for a range of data mashups such as this one.
* This is what data literacy looks like. To extend the analogy, I’m a first grader right now — I can make out the letters, and sound out the simple words… but the ability to confidently read and write in this form still seems like a form of magic.
Finally, I wasn’t the first to shout it from the rooftops, but let me be the latest: ALL HAIL TONY HIRST!