January 17, 2010, by Scott Brinker
The 8th linked data business model
In response to my post on linked data business models, Leigh Dodds at Talis wrote a terrific piece with his thoughts on the business of linked data. Leigh presents a number of great ideas that I think really carry the conversation forward.
One of his points is that I overlooked an important model, what he calls the “sponsorship model.” Under this model, a government entity or a non-profit organization has a funded mandate to deliver certain data to the public or their targeted constituency. I’d humbly suggest calling it the subsidized model though, to avoid confusion, because sponsorship is often associated with advertising and branding — very different business models.
I’ve organized these by how revenue is generated, from direct money-for-data to indirect branding programs.
Within each of these revenue models, there’s also a secondary dimension of how the data is delivered, whether in raw form for others to leverage in their own applications or embedded into a pre-packaged application provided directly to end-users.
1. Subscription model. Some data will be valuable enough that you can charge people a subscription to access it. This model has been around for a while, but it will gain new life as linked data standards make it easier for people to consume and mash-up data in novel applications.
2. Advertising model. Advertising: the second oldest profession. Data-driven applications will have plenty of opportunity for contextual ads and sponsorships. One interesting twist will be advertisers who pay to include information in raw data feeds, data-layer ads if you will.
3. Authority model. If anyone can publish data on the web, how will you know what data is good? That problem will be an opportunity for third-party “authorities” to validate data — or do official reviews and certifications that are published as data — and charge for participation. Compliance services are related to this.
4. Affiliate model. Affiliate marketing programs generate over $6 billion/year in commissions and are a major source of transactions and leads for merchants such as Amazon.com. Embedding affiliate links in data, so that they are activated when surfaced into end-user applications, is a natural extension of this existing model.
5. Value-Add model. Useful data can be bundled with other services to make the overall solution more valuable. For example, think of the benchmarking data now included with Google Analytics. Access to data can also be offered earlier in the sales funnel, as a lead generation incentive.
6. Traffic model. As with Google Rich Snippets, data can be used to boost the visibility and ranking of sites in major and vertical search engines. This is data-enhanced search engine optimization (SEO++) to increase traffic. Nickname: the “data for nothing and links for free” model (apologies to Mark Knopfler).
7. Branding model. As Josh Jones-Dilworth said, “Data shapes conversations and markets.” Data branding can use data — and the vocabularies that define and structure data — to position and promote a company’s worldview and differentiation strategy.
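Of the models above, the affiliate model (#4) lends itself most directly to a code sketch: a publisher rewrites product URLs in its data feed so that each carries a partner identifier when surfaced into an end-user application. A minimal Python sketch, where the field names and the `tag` query parameter are illustrative assumptions rather than any particular affiliate program's API:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def add_affiliate_tag(url: str, partner_id: str) -> str:
    """Append a hypothetical affiliate identifier to a product URL's query string."""
    parts = urlparse(url)
    query = parts.query + ("&" if parts.query else "") + urlencode({"tag": partner_id})
    return urlunparse(parts._replace(query=query))

# An illustrative data record whose purchase link is rewritten before publication.
record = {
    "title": "Example Widget",
    "buy_link": add_affiliate_tag("http://shop.example.com/widget", "partner-123"),
}
print(record["buy_link"])
# -> http://shop.example.com/widget?tag=partner-123
```

Any application that later renders `buy_link` activates the commission, which is exactly the "activated when surfaced" behavior the affiliate model depends on.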
Of course, there will be hybrid models that combine several of these approaches.
Particularly in the early days, most organizations will benefit from experimenting with linked data for traffic, branding, and a little value-add. The value they capture will be learning as much as anything. As the data web matures and they gain experience, they may embrace more direct revenue models.
But don’t underestimate the importance of data branding. When it comes to establishing industry standard vocabularies and ontologies, there is a definite first-mover advantage.
For the entrepreneurs in this space, however, everything is fair game.
Linked Data is about using the Web to connect related data that wasn't previously linked, or using the Web to lower the barriers to linking data currently linked using other methods. This site exists to provide a home for, or pointers to, resources from across the Linked Data community.
W3C POWDER Working Group. The POWDER Working Group is specifying a protocol for publishing descriptions of (e.g. metadata about) Web resources using RDF, OWL, and HTTP.
by: John Erickson. January 19, 2010. DataCite and linked data — or, more to the point, the DOI and linked data — are in essence made for each other. A longer answer is that the DOI infrastructure provides conveniences, such as multiple resolution, and also certain advantages, such as security, as they pertain to referencing and accessing scientific and other datasets. The bottom line is that while the DOI infrastructure does depend upon the non-HTTP protocols of the Handle System “under the hood,” from the consumer’s perspective DOI-based name resolution can (and usually does) operate completely within the “web space.” For linking to articles or datasets, the more familiar URI form of DOIs which combines a given DOI with the URL of a Handle System proxy (e.g. http://dx.doi.org/10.1109/MIC.2009.93) may be used instead of the “native” DOI form.
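The combination described in that last sentence is purely mechanical; a minimal Python sketch, using the Handle System proxy address and the example DOI from the text above:

```python
# The "native" form of a DOI name may carry a "doi:" prefix; the web-resolvable
# URI form prepends the URL of a Handle System proxy such as http://dx.doi.org/.
HANDLE_PROXY = "http://dx.doi.org/"

def doi_to_uri(doi: str) -> str:
    """Convert a DOI name (optionally prefixed with 'doi:') to its proxy URI form."""
    if doi.lower().startswith("doi:"):
        doi = doi[4:]
    return HANDLE_PROXY + doi

print(doi_to_uri("doi:10.1109/MIC.2009.93"))
# -> http://dx.doi.org/10.1109/MIC.2009.93
```

Dereferencing the resulting URL hands resolution to the proxy, which is why, from the consumer's perspective, the whole interaction stays within ordinary web space.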
voiD (from "Vocabulary of Interlinked Datasets") is an RDF based schema to describe linked datasets. With voiD the discovery and usage of linked datasets can be performed both effectively and efficiently. A dataset is a collection of data, published and maintained by a single provider, available as RDF, and accessible, for example, through dereferenceable HTTP URIs or a SPARQL endpoint.
When you submit a URI, it will be matched against datasets that declare a void:uriRegexPattern property. Matching datasets will then have their void:sparqlEndpoint queried for a description of that URI.
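A toy sketch of that matching step, assuming an in-memory list of dataset descriptions; the dataset entries and URIs below are invented for illustration, whereas a real service would load actual voiD descriptions:

```python
import re

# Each description pairs a void:uriRegexPattern with a void:sparqlEndpoint.
# A submitted URI is matched against the patterns to decide which endpoints
# should be queried for a description of that URI.
datasets = [
    {"name": "example-books",
     "uriRegexPattern": r"^http://books\.example\.org/id/.+",
     "sparqlEndpoint": "http://books.example.org/sparql"},
    {"name": "example-places",
     "uriRegexPattern": r"^http://places\.example\.org/id/.+",
     "sparqlEndpoint": "http://places.example.org/sparql"},
]

def endpoints_for(uri):
    """Return the SPARQL endpoint of every dataset whose pattern matches the URI."""
    return [d["sparqlEndpoint"] for d in datasets
            if re.match(d["uriRegexPattern"], uri)]

print(endpoints_for("http://books.example.org/id/moby-dick"))
# -> ['http://books.example.org/sparql']
```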
Civil War Data 150 (“CWD150”) is a collaborative project to share and connect Civil War-related data across local, state, and federal institutions during the sesquicentennial of the American Civil War, beginning in April of 2011. The project will utilize Linked Open Data to find and create connections between archives and help increase the discovery of these resources by researchers and the general public alike.
Rudi Studer, V. Richard Benjamins, and Dieter Fensel. Data & Knowledge Engineering 25(1-2):161–197, March 1998. Definition of ontology (page 25): “An ontology is a formal, explicit specification of a shared conceptualisation. A ‘conceptualisation’ refers to an abstract model of some phenomenon in the world by having identified the relevant concepts of that phenomenon. ‘Explicit’ means that the type of concepts used, and the constraints on their use, are explicitly defined. For example, in medical domains, the concepts are diseases and symptoms, the relations between them are causal, and a constraint is that a disease cannot cause itself. ‘Formal’ refers to the fact that the ontology should be machine readable, which excludes natural language. ‘Shared’ reflects the notion that an ontology captures consensual knowledge, that is, it is not private to some individual, but accepted by a group.”
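As a toy illustration of the “formal” and “explicit constraint” parts of this definition, the medical example can be encoded directly; all disease and symptom names below are invented for illustration:

```python
# Concepts (diseases, symptoms) related by a causal relation, with the
# explicit constraint from the text: a disease cannot cause itself.
causes = {
    ("influenza", "fever"),
    ("influenza", "cough"),
    ("anemia", "fatigue"),
}

def no_self_causation(relation):
    """Check the constraint that no concept may cause itself (no (x, x) pairs)."""
    return all(cause != effect for cause, effect in relation)

print(no_self_causation(causes))  # -> True
```

Because the constraint is expressed in a machine-checkable form rather than natural language, it meets the definition's requirement that the specification be formal and explicit.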