Posted by: John Erickson | March 28, 2010

Long Tails and “Scaling Down” Linked Data Services

This post first appeared in November, 2009 in the Blogger version of this blog. It is updated here as I believe it introduces points relevant to Leigh Dodds’ recent post, Enhanced Descriptions: “Premium Linked Data.” I’ve freshened it as appropriate based on progress since November…

Chris Anderson’s newest book, FREE: The Future of a Radical Price, received some attention this summer, but I’ve actually been meditating on principles he laid out three years ago in his blog post, Scaling up is good. Scaling down is even better. In that post he marveled at the ability of Google et al. to scale down, to run themselves efficiently enough to serve users who generate no revenue at all. Anderson’s principles offer guidance on conducting business such that even if only a tiny percentage of one’s visitors “convert” into paying customers, ensuring that this small percentage is of a very large number can still yield big-time profitability.

My goal with this post is to consider how these ideas might be applied to the domain of Linked Data, and specifically how they pertain to the provision of unique data that adds real value to the greater “Web of Data.”

In his blog Anderson gives us four keys to scaling down: Self-service, “Freemium” services, No-frills products and Crowdsourcing…

1. Self-service: give customers all the tools they need to manage their own accounts. It’s cheap, convenient, and they’ll thank you for it. Control is power, and the person who wants the work done is the one most motivated in seeing that it’s done properly.

“Self-service” applies to linked data services in oh-so-many ways! Self-service in this case is not as much about support (see “Crowdsourcing,” below) as it is about eliminating any and all intervention customers might need to customize or specialize how services perform for them. In principle, the goal should be to provide users with a flexible API and let them figure it out, with the support of their peers. Ensure that everything is doable from their side, and step out of the way.

Note #1 (29 Mar 2010): A great recent example of this is the OpenVocab Project, launched by Ian Davis of Talis. OpenVocab “enables anyone to participate in the creation of an open and shared RDF vocabulary. The project uses wiki principles to allow properties and classes to be created in the vocabulary.”

The (negative) corollary is this: if an organization must “baby sit” its customers by providing specialized services that require maintenance, then they own it and must eat the cost. If instead they allow specializations to be a user-side function, their users own it. But the users won’t be alone; they’ll have the support of their community!
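
A minimal sketch of what this kind of user-side self-service can look like in practice, assuming Python with the SPARQLWrapper library; the endpoint and query below use DBpedia purely as a stand-in for a provider’s service.

```python
# Hypothetical sketch: a user "self-serves" against a public SPARQL endpoint.
# DBpedia stands in for a provider's endpoint; the query is illustrative only.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
      <http://dbpedia.org/resource/Linked_data> rdfs:label ?label .
      FILTER (lang(?label) = "en")
    }
""")
endpoint.setReturnFormat(JSON)

# The provider never intervenes: the user shapes the query, the filtering,
# and the post-processing entirely on their own side.
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
```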

2. “Freemium” services: As VC Fred Wilson puts it, “give your service away for free, possibly ad supported but maybe not, acquire a lot of customers very efficiently through word of mouth, referral networks, organic search marketing, etc, then offer premium priced value added services or an enhanced version of your service to your customer base.” Free scales down very nicely indeed.

There are any number of ways providers might apply this concept to the linked data world:

Free Access                          | Premium Access
-------------------------------------|------------------------------
Restricted vocabulary of assertions  | Full access, all assertions
Limited query rate                   | Unlimited query rate
Limited query extent                 | Unlimited query extent
Limited data size                    | Unlimited data size
Read-only                            | Term upload capability
Narrow reuse rights                  | Broad reuse rights
Community support                    | Private/dedicated support
…                                    | …
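
As a rough sketch of how a provider might encode such a split, the following Python fragment is purely illustrative; the tier names, limits, and predicate whitelist are my own assumptions, not a description of any real service.

```python
# Hypothetical encoding of the free/premium split from the table above.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class AccessTier:
    name: str
    max_queries_per_hour: Optional[int]            # None means unlimited query rate
    max_results_per_query: Optional[int]           # None means unlimited query extent
    allowed_predicates: Optional[Set[str]] = None  # None means the full vocabulary
    can_upload_terms: bool = False

FREE = AccessTier(
    name="free",
    max_queries_per_hour=100,
    max_results_per_query=1000,
    allowed_predicates={
        "http://purl.org/dc/terms/title",
        "http://purl.org/dc/terms/creator",
    },
)

PREMIUM = AccessTier(
    name="premium",
    max_queries_per_hour=None,
    max_results_per_query=None,
    allowed_predicates=None,
    can_upload_terms=True,
)

def may_query(tier: AccessTier, queries_this_hour: int) -> bool:
    """True if another query is allowed under the tier's rate limit."""
    return (tier.max_queries_per_hour is None
            or queries_this_hour < tier.max_queries_per_hour)
```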

Note #2 (29 Mar 2010): In his recent post Enhanced Descriptions: “Premium Linked Data”, Leigh Dodds provides a great freemium/premium example: a base dataset provided for free, and an enhanced set provided at a premium and exposed via his proposed ov:enhancedDescription vocabulary term, which he defined in OpenVocab.
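
A minimal sketch (not Leigh’s actual example) of what this could look like with rdflib: the free description simply points at its premium counterpart via ov:enhancedDescription. The OpenVocab namespace URI and the resource URIs are assumptions made for illustration.

```python
# Hypothetical sketch of linking a free description to a premium, enhanced one.
from rdflib import Graph, Namespace, URIRef

OV = Namespace("http://open.vocab.org/terms/")  # assumed OpenVocab namespace

g = Graph()
g.bind("ov", OV)

free_doc = URIRef("http://data.example.org/doc/widget-123")        # free description
premium_doc = URIRef("http://premium.example.org/doc/widget-123")  # paid, enhanced description

# The free description advertises the enhanced one; access control on the
# premium URI is the provider's concern, not the vocabulary's.
g.add((free_doc, OV.enhancedDescription, premium_doc))

print(g.serialize(format="turtle"))
```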

Note #3 (29 Mar 2010): Derek Gordon just pushed out a great piece, The Era Of APIs, that argues “APIs are at work reshaping the ways in which we understand search today, and will challenge our profession to stretch, grow and change significantly in the coming years.”

3. No-frills products: Some may come for the low cost, others for the simplicity. But increasingly consumers are sophisticated enough to know that they don’t need, or want to pay for premium brands and unnecessary features. It’s classic market segmentation, with most of the growth coming at the bottom.

In the linked data world, achieving “no frills” would seem easy because by definition it is only about the data! For linked data a “frill” is just added complexity that serves no purpose or detracts from the utility of the service. Avoid any temptation to gratuitously “add value” on behalf of customers, such as merging your core graph with others in an attempt to “make it easy” for them. Providers should also avoid “pruning” graphs, except in the case of automated filtering in order to differentiate between Freemium and Premium services.

Note #4 (29 Mar 2010): Providers should weigh this very carefully. It might well be that a “merged” graph truly is a value-added service to users, for which they are willing to pay a premium. My point is simply to avoid the gratuitous and respond to customer needs!
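
For what it’s worth, here is a hypothetical sketch of the kind of automated filtering mentioned above: deriving a free-tier view of a full graph by keeping only a small whitelist of predicates. The predicate choices are illustrative, not a recommendation.

```python
# Hypothetical free-tier pruning: keep only whitelisted predicates.
from rdflib import Graph, URIRef

FREE_TIER_PREDICATES = {
    URIRef("http://purl.org/dc/terms/title"),
    URIRef("http://www.w3.org/2000/01/rdf-schema#label"),
}

def free_tier_view(full_graph: Graph) -> Graph:
    """Return a pruned copy of the graph containing only free-tier predicates."""
    pruned = Graph()
    for s, p, o in full_graph:
        if p in FREE_TIER_PREDICATES:
            pruned.add((s, p, o))
    return pruned
```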

4. Crowdsourcing: From Amazon reviews to eBay listings, letting the customers do the work of building the service is the best way to expand a company far beyond what employees could do on their own.

By now it is not only obvious but imperative that providers foster the development of communities within and around their services. Usually communities are about evangelism, and this is certainly true for linked data providers, but increasingly service providers realize that well-groomed communities can radically reduce their service costs.

Linked data providers should commit themselves to a minimum of direct support and invest in fostering an active community around their service. Every provider should have a means for members of their community to support each other. Every provider should leverage this community to demonstrate to potential adopters the richness of the support and the inherent value of their dataset.

Finally: In a thought-provoking post, Linked Data and the Enterprise: A Two-way Street, Paul Miller reminds the skeptical enterprise community that they, not merely their user community, will ultimately benefit from the widespread use of their data, and that when developing their linked data strategy they should consider how they can “enhance” the value of the Web of Data for paying and non-paying users alike:

…[A] viable business model for the data-curating Enterprise might be to expose timely and accurate enrichments to the Linked Data ecosystem; enrichments that customers might pay a premium to access more quickly or in more convenient forms than are available for free…

I’ve purposely avoided considering the legal and social issues associated with publishing certain kinds of enterprise data as linked data (see also this), which I addressed in a post, Protecting your Linked Data, on the Blogger version of this blog…

Posted by: John Erickson | March 9, 2010

“This linked data went to market…wearing lipstick!?!”

Paraphrasing the nursery rhyme,

This linked data went to market,
This linked data stayed open,
This linked data was mashed-up,
This linked data was left alone.
And this linked data went…
Wee wee wee all the way home!

In his recent post Business models for Linked Data and Web 3.0 Scott Brinker suggests 15 business models that “offer a good representation of the different ways in which organisations can monetise — directly or indirectly — data publishing initiatives.” As is our fashion, the #linkeddata thread buzzed with retweets and kudos to Scott for crafting his post, which included a very seductive diagram.

My post today considers whether commercial members of the linked data community have been sufficiently diligent in analysing markets and industries to date, and what to do moving forward to establish a sustainable, linked data-based commercial ecosystem. I use as my frame of reference John W. Mullins’ The New Business Road Test: What entrepreneurs and executives should do before writing a business plan. I find Mullins’ guidance to be highly consistent with my experience!

So much lipstick…
As I read Scott’s post I wondered, aren’t we getting ahead of ourselves? Business models are inherently functions of markets — “micro” and “macro” [1] — and their corresponding industries, and I believe our linked data world has precious little understanding of the commercial potential of either. Scott’s 15 points are certainly tactics that providers, as the representatives of various industries, can and should weigh as they consider how to extract revenue from their markets, but these tactics will be so much lipstick on a pig if applied to linked data-based ecosystems without sufficient analysis of either the markets or the industries themselves.

[Image: pig sporting lipstick]

To be specific, consider one of the “business models” Scott lists…

3. Microtransactions: on-demand payments for individual queries or data sets.

By whom? For what? Provided by whom? Competing against whom? Having at one time presented to investment bankers, I can say that “microtransactions” is no more of a business model for linked data than “Use a cash register!” is one for Home Depot or Sainsbury’s! What providers really need to develop is a deeper consideration of the specific needs they will fulfill, the benefits they will provide, and the scale and growth of the customer demand for their services.

Macro-markets: Understanding Scale
A macro-market analysis will give the provider a better understanding of how many customers are in its market and what the short- and long-term growth rates are expected to be. While it is useful for any linked data provider, whether commercial or otherwise, to understand the scale of its customer base, it is absolutely essential if the provider intends to take on investors, because they will demand credible, verifiable numbers!

Providers can assess their macro-markets by identifying relevant trends: demographic, socio-cultural, economic, technological, regulatory, and natural. Whether the macro-market is attractive depends on whether those trends work in favour of the opportunity.

Micro-markets: Identifying Segments, Offering Benefits
Whereas macro-market analysis considers the macro-environment, micro-market analysis focuses on identifying and targeting segments where the provider will deliver specific benefits. To paraphrase John Mullins, successful linked data providers will be those who deliver great value to their specific market segments:

  • Linked data providers should be looking for segments where they can provide clear and compelling benefits to the customer; commercial providers should especially look to ease customers’ pain in ways for which they will pay.
  • Linked data providers must ask whether the benefits their services provide, as seen by their customers, are sufficiently different from and better than their competitors’, e.g. in terms of data quality, query performance, a more supportive community, or better contract support services.
  • Linked data providers should quantify the scale of the segment just as they do the macro-environment: how large is the segment and how fast is it growing?
  • Finally, linked data providers should ask whether the segment can be a launching point into other segments.

The danger of falling into the “me-too” trap is particularly glaring with linked data, since a provider’s competition may come from open data sources as well as other commercial providers: think Encarta vs. Wikipedia!

Having helped found a start-up in the mid-1990s, I am acutely aware of the difference between perceived and actual need. The formula for long-term success and fulfillment is fairly straightforward: provide a service that people need, and solve problems that people need solved!

References

  1. John W. Mullins, The New Business Road Test (FT Prentice Hall, 2006)
Posted by: John Erickson | February 4, 2010

DOIs, URIs and Cool Resolution

Posted by: John Erickson | February 3, 2010

Community as a Measure of Research Success

In his 02 Feb 2010 post entitled Doing the Right Thing vs. Doing Things Right, Matthias Kaiserswerth, the head of IBM Research – Zurich, sums up his year-end thinking with this question for researchers…

We have so many criteria of what defines success that one of our skills as research managers is to choose the right ones at the right time, so we work on the right things rather than only doing the work right…For the scientists that read this blog, how do you measure success at the end of the year?

Having just “graduated” after a decade with another major corporate research lab, I find this topic near and dear to my heart! My short answer was the following blog comment…

I can say with conviction that the true measure of a scientist must be their success in growing communities around their novel ideas. If you can look back over a period of time and say that you have engaged in useful discourse about your ideas, and in so doing have moved those ideas forward — in your mind and in the minds of others — then you have been successful…Publications, grad students and dollar signs are all artifacts of having grown such communities. Pursued as ends unto themselves, it is not a given that a community will grow. But if your focus is on fostering communities around your ideas, then these artifacts will by necessity follow…

My long answer is that those of us engaged in research must act as stewards of our ideas; we must measure our success by how we apply the time, skills, assets, and financial resources we have available to us to grow and develop communities around our ideas. If we can look back over a period of time — a day, a quarter, a year, or a career — and say that we have been “good stewards” by this definition, then we can say we have been successful. If on the other hand we spend time and money accumulating assets, but haven’t moved our ideas forward as evidenced by a growing community discourse supporting those ideas, then we haven’t been successful.

A very trendy topic over the past few years has been open innovation, as iconified by Henry Chesbrough’s 2003 book by the same name. Chesbrough’s “preferred” definition of OI, found in Open Innovation: Researching a New Paradigm (2006), reads as follows…

Open innovation is the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively. [This paradigm] assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as they look to advance their technology.

In very compact language Chesbrough (I believe) argues that innovators within organisations can best move their ideas forward through open, active engagement with internal and external participants. [1] Yes, individual engagement could be conducted through closed “tunnels,” but for the ideas to truly flourish (think Java) this is best done through open communities. I believe the most important — perhaps singular — responsibility of the corporate research scientist is to become a “master of their domain,” to know their particular area of interest and expertise better than anyone, to propose research agendas based upon that knowledge, and to leverage their company’s assets to motivate communities of interest around those ideas. External communities successfully grown on this view of OI can become force multipliers for the companies that invest in them!

To appreciate this one needs only to consider the world of open source software and the ways in which strong communities contribute dimensions of value that no single organisation could… I’ll pause while you contemplate this idea: open-source like communities of smart people developing your ideas. Unconvinced? Then think about “Joy’s Law,” famously attributed to Sun Microsystems co-founder Bill Joy (1990):

No matter who you are, most of the smartest people work for someone else

Bill Joy’s point was that the best path to success is to create communities [2] in which all of the “world’s smartest people” are applying themselves to your problems and growing your ideas. As scientists, our measure of success must be how well we leverage the assets available to us to grow communities around our ideas.

Peter Block has given us a profound, alternative perspective on the role of leaders in the context of communities [3]. In his view, leaders provide context and produce engagement; specifically, Block’s leaders…

  • Create a context that nurtures an alternative future, one based on gifts, generosity, accountability, and commitment;
  • Initiate and convene conversations that shift people’s experience, which occurs through the way people are brought together and the nature of the questions used to engage them;
  • Listen and pay attention.

Ultimately, I believe that successful researchers must first be successful community leaders, by this definition!

Update: In a 4 Feb 2010 editorial in the New York Times entitled Microsoft’s Creative Destruction, former Microsoft VP Dick Brass examines why Microsoft, America’s most famous and prosperous technology company, no longer brings us the future. As a root cause, he suggests:

What happened? Unlike other companies, Microsoft never developed a true system for innovation. Some of my former colleagues argue that it actually developed a system to thwart innovation. Despite having one of the largest and best corporate laboratories in the world, and the luxury of not one but three chief technology officers, the company routinely manages to frustrate the efforts of its visionary thinkers.

I believe Mr. Brass’ analysis is far too inwardly focused. Never in his editorial does Mr. Brass lift up the growing outreach by Microsoft Research, especially under the leadership of the likes of Tony Hey (CVP, External Research) and Lee Dirks (Director, Education & Scholarly Communications), to empower collaboration with and sponsorship of innovative researchers around the world. Through its outreach Microsoft is enabling a global community of innovators and is making an important contribution far beyond its bottom line. I think Mr. Brass would do well to focus on the multitude of possibilities Microsoft is helping to make real through its outreach, rather than focusing on what he perceives to be its problems.

Notes:

  • [1] One version of the open innovation model has been called distributed innovation. See e.g. Karim Lakhani and Jill Panetta, The Principles of Distributed Innovation (2007)
  • [2] Some authors have referred to “ecologies” or “ecosystems” when interpreting Bill Joy’s quote, but I believe the more accurate and useful term is community.
  • [3] For more on community building, see Peter Block, esp. Community: The Structure of Belonging (2008)
Posted by: John Erickson | January 26, 2010

Scale-free Networks and the Value of Linked Data

Posted by: John Erickson | January 20, 2010

Protecting and Licensing Your Linked Data

Posted by: John Erickson | January 20, 2010

Thoughts on Securing Linked Data with OAuth and FOAF+SSL

Posted by: John Erickson | January 19, 2010

The DOI, DataCite and Linked Data: Made for each other!

Posted by: John Erickson | January 11, 2010

The Evolution of Linked Data Business Models

Posted by: John Erickson | January 8, 2010

Is Semtweet a Client, a Service or a Nanoformat?

An interesting, multi-faceted discussion has ensued over the last 48 hours on #Semtweet regarding Nova Spivack’s idea for a Semantic Twitter Client (Semtweet). In addition, resources have begun to accumulate in the Semantic Microblogging Twine. I’d like to try to pick at the different perspectives from which the Semtweet crowd seems to be viewing this challenge:

Semtweet is a service: I think most of the crowd has identified that the capabilities described by Nova, while possibly rendered in clever ways by clients, are made possible by distributed services. The discussion has turned the spotlight on recent work in semantic microblogging, with the distributed microblogging prototypes SMOB (Alex Passant et al., DERI) and TwitLogic (Josh Shinavier, TWC, RPI) being highlighted (and Twined) amongst others.

Semtweet is a nanoformat: Platforms like TwitLogic and SMOB complement/augment microblogging services “such as” Twitter, but still depend on microposts having been encoded using particular syntactical standards — a microblogging nanoformat — as a basis to generate and persist their useful, value-added mini-graphs. A worthy debate may ensue as to whether the current nanoformats are good enough, or whether new expressive capabilities are required. Excellent summaries of current nanoformatting nanostandards and their significance can be found, e.g., in TwitLogic (Josh Shinavier) and Tweet, Tweet, Retweet: Conversational Aspects of Retweeting on Twitter (danah boyd et al.), in addition to the previous link.
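
To give a flavour of what such syntactic conventions enable, here is a toy Python sketch that pulls a few machine-parsable structures (hashtags, @-mentions, and "namespace:key=value" machine tags) out of a micropost. The syntax is invented for illustration and is not the actual SMOB or TwitLogic encoding.

```python
# Toy nanoformat parser: the conventions below are illustrative only.
import re

MICROPOST = "Reading #linkeddata at @twc_rpi geo:lat=42.73 geo:lon=-73.68"

HASHTAG = re.compile(r"#(\w+)")
MENTION = re.compile(r"@(\w+)")
MACHINE_TAG = re.compile(r"(\w+):(\w+)=(-?[\w.]+)")

def parse_micropost(text: str) -> dict:
    """Pull simple semantic hooks out of a (typically 140-character) micropost."""
    return {
        "hashtags": HASHTAG.findall(text),
        "mentions": MENTION.findall(text),
        "machine_tags": [
            {"namespace": ns, "key": key, "value": value}
            for ns, key, value in MACHINE_TAG.findall(text)
        ],
    }

print(parse_micropost(MICROPOST))
```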

Nanoformats try to maximise the semantic density of a (typically) 140-character micropost. As existing syntaxes are expanded and new ones introduced, manual text entry becomes harder and manual interpretation impossible; remember the Obfuscated Perl contests of a decade ago? Although Josh has argued in his TwitLogic paper that it is possible to pack machine-parsable semantics into (mostly) natural-language expression, I’m wondering if there might also be room for iGoogle Gadget-like, nanoformat-specific, scripted interface plugins — nanoformatting gadgets — that could help users micropost in more standardized ways. Nanogadgets could be implemented as pop-up mini-forms or WYSIWYG-style interface aids by which users could embed links, data, geolocations, etc. Similar plug-ins could help interpret such embeds in value-added ways. This brings us to…

Semtweet is a client: Much of the “semantic” workload for Semtweet is best done in a distributed way by services, but as my note above on nanoformatting gadgets highlights, there is room for a dedicated client. However, I caution the Semtweet community to consider that, based on current Twitter client statistics compiled by @twitstat (corroborated by statistics from funkatron, creator of the Spaz client), the web-hosted Twitter client still has the largest share (although the vast majority of use is spread across an array of clients, with Tweetdeck being the next most popular). What do these numbers mean? First, that we must remember that there are many clients out there and the common denominator is still manual entry and interpretation; second, that there is still plenty of opportunity for new clients that truly add value and especially that change the game.

Semtweet is a “commercial” open source project: A bit surprising — even to me, a recovered DRM guy — is how quickly the open source issue has entered the discussion, having been introduced by entrepreneurs like Jeff and Nova and even myself; personally, I can’t seem to avoid a good (or any) legal argument…er…debate.

From the preceding discussion, we can see that Semtweet could manifest itself as value-added services and clients that are optimised to render those services. From the client perspective, I won’t even go there; I can’t imagine a commercial client; as Nova’s original micropost suggested, we’re talking about “Firefox-like,” even for truly value-added extensions. Services, however, are a different matter; while it is easy to see a Semtweet client thriving on completely open data, it is also possible to see it working within the enterprise environment, making grouchy old CTOs happy. Several potential value propositions come to mind there, including the provision of consulting services and support; the cloud-based hosting and operation of proprietary services; and the for-pay provision of value-added capabilities not available at zero cost. Insert the usual GPL licensing discussions here…
