Zero Blue Links: Search After Traffic

by Aaron Bradley on January 29, 2015

in Search Engines, Semantic Web, SEO

Imagine a world in which there was still an internet, but no websites. A world in which you could still look for and find information, but not by clicking from web page to web page in a browser.

A world where you could search the web from any internet-connected device without needing to visit a website to start your search.

A world where search engines provided answers directly to your questions, rather than returning a list of websites where an answer might be found.

It's a world not all that different than our own, because of course we already live in a world where we can ask questions by diverse means on diverse devices and expect a direct answer in response.

But it's a pretty different world for conducting business on the internet from the one that existed even a few years ago, and it's especially different at the point where any sort of internet transaction is overwhelmingly likely to begin – with a search query.

Because this shift away from a website-centered internet has been gradual, and because this shift is not (and will never be) complete, business owners and marketers have been slow to recognize the long-term implications of this change and react meaningfully to it.

But before I start to examine the consequences for search engines and search marketing of a world that's decreasingly reliant on website visits, I'd like to start with a parable.

A parable about how the world has changed for search engines like Google. And it's a parable because it's not about search engines, but about a movie, an app, some user ratings, and a number of sub-optimal website experiences.

A parable: is this Muppets movie worth watching?

I'm a movie fan plagued by more choices than I have time. So when I'm looking through the cable listings trying to decide what movies to record, I like to consult the ratings and reviews provided by other users.

For years I've used Zap2it to look at TV listings. Zap2it displays user ratings, but they're typically blank.

Zap2it in-line listing for the movie Muppets Most Wanted

The website of the premium cable channel that carries this film, Movie Central, is no better: there's a provision for user ratings, of which there are none.

Movie Central listing for the movie Muppets Most Wanted

Given that both Zap2it and Movie Central list upcoming programs, it makes sense that one sees few or no ratings for each program: the users most likely to look for information about an upcoming program are the least likely to have already seen it.

Because the movie in question is a major Hollywood film that's already played in theaters, it also makes sense that there should be ratings and reviews readily available for the flick – as indeed there are, although one has to navigate away from the site to get to them.

Searching the string Muppets Most Wanted from Movie Central to get IMDb ratings for the movie

What doesn't make sense is for Zap2it and Movie Central to rely solely upon their own user ratings to begin with. If these services wanted to be useful, why not provide me with readily available movie ratings and reviews up front, rather than forcing me to leave their sites to hunt them down?

Which brings me to the app provided by my cable service, Optik TV from TELUS.

In that guide, when you tap on an item to get program details, there are four links near the bottom of the screen: news, Wikipedia, IMDb and YouTube.

The Optik TV app TV listings and program detail screens

When you tap on the IMDb link, IMDb ratings and movie details are made available. When you tap on the Wikipedia link, the Wikipedia page for the movie is made available, including the "Critical response" section. And from either of these entries one can tap back to the program details page, and from that back to the TV listings themselves.

The Optik TV linked IMDb and Wikipedia entries for the movie Muppets Most Wanted

I almost wet myself when I first used this app.

Not because it uses breathtaking technology, or is especially innovative, but because it provides exactly the sort of functionality you'd expect of a TV listings service – in such stark contrast to the web pages that could readily offer such functionality, but don't.

Those sub-par web pages practically scream the principle that's responsible for making them sub-par: keep users in our clutches. That is, keep users on our website, and ensure that any interactions that occur support the goal of attracting and keeping users on our website.

Even if this means providing no ratings for a movie that has been rated by 16,000 people on IMDb.

The Optik TV app, though, is freed from these constraints because of functional demands. That is, the functional requirements of the app removed the marketing friction that's insinuated itself into web processes.

Freed from this friction, the app ended up both useful and self-sufficient. They didn't need to include a useless and self-serving program rating function of their own, because it would have increased the technical complexity of the app without providing a benefit to anybody – the cable provider or its customers.

For the record, Optik TV has both a web-based guide/remote recording application and, of course, an on-screen guide/recording application. While there's much to be said about the relative merits of both of these in comparison to the app (to which they're both inferior), they're somewhat of a non sequitur in the narrow context of this parable because neither provides ratings of any sort.

The listing for the movie Muppets Most Wanted as it appears on the Optik TV remote record website application and on the Optik TV on-screen guide

(Suffice it to say that the web-based service has internal scrollbars and requires … Silverlight. Seriously.)

I call this a parable because the forces of form and function that gave birth to this app closely reflect the changing demands being placed on modern search engines.

Before I move my focus to those search engines, let me leave you with two observations about the nature of that app which help explain why it is so superior to its website-based alternatives.

Or, to frame those observations another way, one thing the app is and one thing the app isn't, spelled out in the two sentences that follow.

It is an app – that is, an application designed for use on mobile devices.

And it is not a website.

From parable to paradigm: the already-passé structured snippet

Once upon a time and a very good time it was there was a user coming down along the road and this user that was coming down along the road met a nicens little search engine named baby Google.

This user asked a question of the baby Google, and the baby Google smiled and gurgled and then sputtered out a gaggle of website addresses where the user might be able to find an answer to his question.

Then the baby Google grew up.

The grown-up Google, rather than simply trying to find web pages containing the same words employed by the user in their query, and then providing the user with a list of those web pages, is able to use two things it has learned in order to provide the user with a direct answer.

What is the Knowledge Graph? - the question and Google's answer

First, it has learned to better understand the question being asked, and especially about the things the user is asking about. It has come to understand, for example, that "flights from YVR to YYZ" means the same thing as "air travel vancouver toronto pearson", even though the queries don't share any of the same words.

Flights from YVR to YYZ and Air Travel Vancouver Toronto Pearson - Queries that Google understands are equivalent
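To make the idea concrete, here's a toy sketch – my illustration, not Google's actual machinery – in which both query strings resolve to the same structured travel intent once their tokens are mapped to entities. The synonym table and function are invented for the example.

```python
# Hypothetical entity table mapping query tokens to airport entities.
AIRPORT_TOKENS = {
    "yvr": "YVR", "vancouver": "YVR",
    "yyz": "YYZ", "toronto": "YYZ", "pearson": "YYZ",
}

def parse_flight_query(query: str) -> tuple[str, str] | None:
    """Resolve a free-text flight query to an (origin, destination) pair."""
    codes = [AIRPORT_TOKENS[t] for t in query.lower().split() if t in AIRPORT_TOKENS]
    route = list(dict.fromkeys(codes))  # dedupe: "toronto pearson" is one airport
    return (route[0], route[1]) if len(route) == 2 else None

# Different words, same meaning – and the same parsed intent:
assert parse_flight_query("flights from YVR to YYZ") == ("YVR", "YYZ")
assert parse_flight_query("air travel vancouver toronto pearson") == ("YVR", "YYZ")
```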

Second, it has learned to extract information from web pages and return that information in response to queries, rather than simply index the words that appear on the web page and return those web page addresses in response to queries that contain those words.

The IMDb list of actors in Casablanca and the Google result for a query about actors in Casablanca
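At its simplest, that kind of extraction is just a matter of reading structured markup out of a page's HTML. A minimal sketch, assuming a page that (hypothetically) marks its cast up with schema.org/Movie microdata – the URL is invented:

```python
import requests
from bs4 import BeautifulSoup

def extract_actors(url: str) -> list[str]:
    """Pull actor names from a page marked up with schema.org microdata."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # itemprop="actor" is schema.org's property for a movie's cast
    return [tag.get_text(strip=True)
            for tag in soup.select('[itemprop="actor"]')]

# extract_actors("https://example.com/title/casablanca")
# -> ["Humphrey Bogart", "Ingrid Bergman", "Paul Henreid", ...]
```

Once facts like these sit in a store keyed to the entity Casablanca (the film), they can be returned directly in response to a query, with or without a pointer back to the source document.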

Better understanding of queries (epitomized by Hummingbird) and the ability to extract and retrieve data (epitomized by the Knowledge Graph): that, in a nutshell, is semantic search.

But I'm not here to describe, for the zillionth time, the meaning of semantic search. If you want to learn a bit more about the topic you can check out one of the articles I've written on the subject, or come chew the fat with me and other geeks over on Semantic Search Marketing.

No, far more interesting now than semantic search are the implications of what must now be regarded as its success, and its impact on the future of search marketing. More interesting because the impact of that success on the traditional web ecosystem is huge, and because – despite this massive change – the vast majority of search marketers continue to approach SEO in 2015 like it's 1998.

It's still 1998 for most search marketers because while the tactics used by SEOs may have changed – and even changed drastically – in the 17 years since Google launched their search engine, the goal of SEO efforts hasn't changed one iota: drive traffic to websites.

In fairness, there's still an abundance of pages with ten blue links that support this goal, and a marketer transported directly from 1998 to the present day would be perfectly comfortable with many of the results they'd encounter today.

A classic Google SERP: ten blue links

When semantically-fueled search features appear, however, they're seen through that single-minded lens. Does this feature improve or retard my ability to drive traffic from a search engine to a website?

This was certainly the question most marketers asked when Google introduced its structured snippet, a summary of key information about the topic of a particular web page (in semantic parlance, a list of properties about a specific entity and their corresponding values).

A Google structured snippet for a Nikon D7100 camera

Marketers generally greeted the appearance of structured snippets without malice, not because they interpreted them as a change to the search-results-to-website-traffic model, but because they didn't see them threatening that model. Indeed, they've been generally seen as a desirable visual enhancement to a search snippet along the lines of a rich snippet; in the words of Ann Smarty, "structured snippets make the listing stand out and seem to spur some interest."

These ain't your daddy's rich snippets, though.

Not so much because they're the future of search, as that they presage what's coming. This is the future of search.

A Google test of sponsored results triggered by the query "nikon d7100"

Google produces structured snippets by aggregating facts about a particular entity, derived by parsing web tables found for individual instances of that entity. The direction pointed to by structured snippets isn't toward individual entries for web pages augmented with a few factoids, but toward an aggregate entry for whatever's described by a number of web pages, listing common properties of that thing.
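A rough sketch of what that aggregation might look like – this is my illustration of the concept, not Google's actual pipeline, and the URLs are invented – reading the two-column spec tables from several pages about the same entity and keeping, for each property, the value most sources agree on:

```python
from collections import Counter

import requests
from bs4 import BeautifulSoup

def spec_pairs(html: str) -> dict[str, str]:
    """Read two-column table rows as property/value pairs."""
    soup = BeautifulSoup(html, "html.parser")
    pairs = {}
    for row in soup.select("table tr"):
        cells = [c.get_text(strip=True) for c in row.find_all(["th", "td"])]
        if len(cells) == 2 and cells[0]:
            pairs[cells[0].lower()] = cells[1]
    return pairs

def aggregate_specs(urls: list[str]) -> dict[str, str]:
    """For each property, keep the value asserted by the most sources."""
    votes: dict[str, Counter] = {}
    for url in urls:
        html = requests.get(url, timeout=10).text
        for prop, value in spec_pairs(html).items():
            votes.setdefault(prop, Counter())[value] += 1
    return {prop: c.most_common(1)[0][0] for prop, c in votes.items()}

# aggregate_specs(["https://example.com/reviews/nikon-d7100", ...])
# -> {"sensor": "24.1 MP DX-format CMOS", "weight": "765 g", ...}
```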

(It's worth taking a moment to recall the TELUS app in the context of these very different search results. For a user querying "nikon d7100" on their mobile phone, what's the better experience: navigating back and forth between multiple websites – which may or may not be optimized for mobile – and mentally aggregating information about camera specifications, pictures of the camera, consumer reviews, similar products and different vendors' offers, or having all that information available in a single table?)

Conceptually, this comes close to a complete inversion of the search-results-to-website-traffic model, as its direction is website-data-to-search-results.

From documents to data

What do Google's structured snippets, Knowledge Graph verticals, "direct answers", sports scores, movie carousels, step-by-step instructions and similar features have in common? They're all data returned in response to a search query about a thing.

Sometimes a reference to the document from which the data was derived is present, and sometimes not (when the data source is not a web document this obviously isn't possible), but it is data that's being provided, rather than a document summary.

The traditional search snippet of the ten blue links is exactly that, though: a summary of a document that may or may not contain the desired information about a thing or things referenced in a search query, provided to aid the searcher in determining whether or not they should click through to that document.

It's easier and more enlightening, I think, to see the evolution of search results in terms of the gradual supplanting of document references by data than it is to infer that direction through the enumeration of individual features.

One can even divide what appears on a search engine results page into each of these categories. When one does so with the examples just provided, that evolutionary path is unmistakable.

The evolution of Google search results: from document summaries to data

Are these results cherry-picked? You bet. Given that, can one assert with any confidence that these aren't just tests, and that we're going to continue to see more data and fewer documents in the search results?

Yes. Aside from the evidence one can cite to show that's the case (for example, as recorded by the Moz Google SERP Feature Graph, direct answers are now present in 4.9% of results, up from 3.8% in October 2014), for a broad range of queries data is simply far more useful than documents.

Not because a fact exposed in search results is necessarily superior to a fact that's found in a search results-linked document, but because of the things you can do with data that you simply can't do with documents.

Data wants to be linked

There's an adage, seemingly coined by Stewart Brand in 1984, that "information wants to be free."

Leaving aside for a moment the contentiousness of this phrase in the digital age (especially the "free" part), facts that can be extracted from web documents or provided directly to data consumers "want" something else: to be linked to other facts.

It's this that makes data – "facts", "things" – so irresistible to Google that they spent a guesstimated $100 million to acquire Metaweb, and heaven knows how much money and effort to develop the Knowledge Graph.

I'm not necessarily speaking here of "linked data" as it has been formally described by semantic web technologists, although Google's Wikipedia-derived definition of linked data isn't a bad thing to keep in mind when exploring the topic.

Google's Wikipedia-derived definition of linked data

At a more basic level, data simply becomes more useful when you're able to interconnect one piece of data with another. An environment where data is linked gives any single fact context, allows one fact to be compared to another, and facilitates the exploration of a topic in-depth.

Take, for example, this Google query response ("query response" being the new SERP) for the search "gdp of india".

Google query result for "gdp of india"

Not only is the data requested provided, but a host of related information – linked data – is also presented: other Indian economic indicators, the GDP of other countries, further information about India, and so on.

Once you've got a handle on what entities are being represented, and formally know some things about them, the usefulness of linking like things together makes the impetus to do so inexorable. A TV recording app is much more useful when its listings are linked to IMDb, itself more useful than a book containing identical content because its entities are interlinked.
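You can get a feel for this by querying a public linked-data graph yourself. A minimal sketch using DBpedia's SPARQL endpoint via the SPARQLWrapper library – the property name dbo:gdpNominal is illustrative and may differ in the live ontology:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?gdp WHERE { dbr:India dbo:gdpNominal ?gdp . }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["gdp"]["value"])
```

Because dbr:India is a node in a graph rather than a string in a document, the same URI links onward to population, currency, neighbouring countries and so on – precisely the sort of related information a linked-data-aware search engine can surface alongside an answer.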

If the only thing Google could draw on was an index of the textual content of web pages (documents), none of this would be possible. In 1998 the best thing Google could provide to a searcher was ten blue links, with the hope that one of the listed documents contained the requested information.

Now it can increasingly call on a repository of organized data that's been extracted from those documents or has been, well, otherwise acquired.

Did you notice the "Explore more" link under the chart? This is where it leads.

Google Public Data chart for "gdp of india"

While this further illustrates how data about an entity opens up all sorts of possibilities once it is interlinked with other data, the chart and the page of data to which it is linked reveal another impending shock to the system for the traditional search marketer.

This is related not to the data itself, but to the sources from which that information is derived.

The API-wrought web

Take note not only of the anchor text for the data source attribution, but also of the target URL.

One of the data sources for Google

Did Googlebot crawl the source's website, meticulously parsing tables and aggregating facts about the Indian economy à la structured snippets? Not likely. Nor will you find structured data on, say, the source's page about India.

Google doesn't need a website to get the information it needs to provide a searcher with the GDP of India. URIs, absolutely, but not a website.

Information about the API
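As one concrete illustration of what "URIs, but not a website" looks like in practice – an example source for GDP figures, not necessarily the exact feed behind Google's chart – the World Bank's public API returns the figure as plain JSON from a stable URI:

```python
import requests

# GDP (current US$) for India; endpoint pattern per the World Bank API v2 docs.
URL = ("https://api.worldbank.org/v2/country/IND/"
       "indicator/NY.GDP.MKTP.CD?format=json&per_page=5")

meta, rows = requests.get(URL, timeout=10).json()
# Rows arrive most-recent-first; the latest year can be null until published.
latest = next(r for r in rows if r["value"] is not None)
print(latest["date"], latest["value"])
```

No HTML, no browser, no web page – just a URI that answers a question.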

As search marketers grapple with the horror of direct answers and other document-derived data presented in the SERPs, a lot of discussion has surrounded the presence, prominence and URL target of the source attribution link.

Is the attribution a suitably outstanding CTA? Should webmasters have the ability to instruct Google which link target to use? Is it sufficiently effective that I should be providing my data to Google in an easy-to-digest fashion, or should I be withholding data in the hopes of preventing the generation of an answer box to begin with?

Yolandi Visser from the video for "Baby's on Fire" by Die Antwoord

These are the wrong questions for forward-facing marketers to worry about, because they're focused on the precise and sole location where a piece of information resides, and in the emerging world of APIs a clickable web address is a decreasingly important commodity.

What's important isn't where that information resides, but the value of the information itself.

In a recent article for The Atlantic, Robinson Meyer explains the centrality of the API to the re-imagining of the Smithsonian's Cooper Hewitt Museum, in a piece appropriately enough titled "The Museum of the Future Is Here".

The central role played by an API in the recently re-imagined Cooper Hewitt Museum

The API allows visitors to use a type of electronic pen to touch items they find of interest. "When they leave, they will have to return the pen, but information about and high-resolution photos of the object will be waiting for them."

While the rabbit hole of "The Internet of Things" obviously beckons here, the point is that the API allows the Museum's digital assets to be conceived of as things in their own right, rather than as objects strictly contained in a bunch of web documents. The flexibility provided by the API keeps the focus on the information rather than the mechanism by which that information is conveyed. Says Aaron Straup Cope, lead engineer at Cooper Hewitt Labs:

[The] API is there to develop multiple interfaces. That’s the whole point of an API—you let go of control around how people interpret data and give them what they ask for, and then have the confidence they'll find a way to organize it that makes sense for them.

For much of the information that we care about, websites, in fact, might just turn out to be an artifact of the early days of the web, a necessary bridge as we figured out how to better create, store and share information over the internet.

Two of the world's top ten websites are not websites

Everyone's always talking about that Twitter. Must be one kick-ass website, chock full of great content, right?

The Twitter "home page"

Er, okay, then what about Facebook? Depending on whose metrics you read, it's the most popular website in the world!

The "home page" of Facebook

Neither Twitter nor Facebook, of course, has a "home page". They have addresses – twitter.com, facebook.com – that mimic a traditional web page, but rather than providing access to a document, what these sites provide is access to a stream of data.

Accordingly we can access the stream in a number of different ways, ways which need not entail a website visit. Facebook clobbers all other apps when it comes to time spent: Facebook doesn't need to herd everyone over to a web page to sell them something.

Which is something to keep in mind when grappling with the changes brought about by an API-wrought web: Twitter and Facebook carry their advertising with them.

To mobile and beyond!

Mobile has forced everyone's hands. The crappy user experiences people have long endured on desktop computers can't be supported on mobile devices: an interface that's merely annoying on a laptop is all but unusable on a smartphone.

Mobile is also putting pressure on location-specific conversion and monetization strategies, and especially on those that rely on search results driving a user to a URL where they're then required to take another action, like initiating a purchase or filling out a lead generation form.

And by "mobile" I mean "the humongous and growing collection of devices that are connected to the internet." Website-bound strategies fare poorly in environments without websites.

News aggregator and streaming service Watchup

Put the usability demands of mobile and this proliferation of internet-connected environments together and it's a perfect storm of change directed at traditional approaches to the delivery of content, and the use of content as a marketing asset.

The BBC, in its recent report "Future of News: News v Noise", not only acknowledges the changes wrought by the proliferation of content delivery mechanisms, but also how extracted, organized and connected data – as I discussed above – drives innovation. In a section titled "Content everywhere" the report's authors have this to say:

Smaller and more powerful devices and wearable technology – phones, cameras, screens – will allow people both to create and consume high quality content more easily and more cheaply.

What's the role of data in this? Under the heading "Using data" we find:

The challenge of using data effectively will be central – whether that means data about how our content is being consumed, making wide use of data sources in our journalism, or managing and structuring the data around our own content.

The following section, which ties together the concepts of ubiquitous connectivity and data use, is worth quoting at length for its perspicuity.

The ability to source, represent and make sense of large volumes of data will be vital. There will be big opportunities for experimentation and innovation with content, and we will need smarter ways of curating and managing both our own and other people’s. Linked data should make it easier to remix and distribute all the components of our journalism – text, video, audio, graphics or data – to meet the needs of different platforms, devices and users.

We will need to find new ways of connecting with individuals and serving their specific needs, whilst also retaining an overview of what is most important and most interesting, in our own editorial judgment.

All this may mean redefining how we think of news, so that we see it as a service, comprising not just news stories, but also the relevant data, context and information that people need, delivered to fit into their lives.

Compare and contrast this vision with the goal of most "content marketing."

Content marketing and the long twilight of click-bait

There is absolutely a place for great data and great documents and great digital media – "content" – in marketing.

But much marketing content is to content as stock photos are to photos (and of course stock photos adorn a lot of "marketing content"): it lacks independent value.

Viable marketing content should inform or entertain or otherwise be useful – and useful in its own right – to the consumers of that content. Marketing content whose sole raison d'être is to get its consumers to click through to a web page is pure bait-and-switch.

The pressures of a multi-device, API-driven, linked data world are actually incredibly liberating for content marketers, though. Because once it ceases to be all about driving traffic to your website, your content can be used in diverse digital environments.

And if driving website traffic is no longer the primary goal of search marketing, but rather conveying whatever information it is you want to convey, Google becomes an ally rather than an adversary. It's making your content available on their platform.

You know, in a good way.

The revolution does not have a TL;DR

Websites will not disappear, and they'll almost certainly be an enormous part of what we do on the Internet, and how we do it, for a long time to come.

Accordingly, for most queries search engines will continue to return ten blue links, or something very much like ten blue links, for a long time to come.

But search engines will continue to favor a direct exchange of information with their users over mediating visits between websites. And users of search engines will increasingly be the owners of smart phones and smart watches and smart automobiles and smart TVs, and will come to expect seamless, connected, data-rich internet experiences that have nothing whatsoever to do with making website visits.

You'll hear fairly frequently that "the fundamentals of SEO haven't changed," but the transition away from a simple SERP-to-website model puts the lie to this aphorism. Driving traffic from the search results to a website is as fundamental to traditional SEO as it gets.

So the adaptations search marketers will be forced to make will be strategic rather than tactical. No amount of tricks 'n tips will turn things around when the end game has changed.

But it's exciting, because just as the pressures of the mobile web and the utility of linked data conspired to make an app published by a cable provider useful, so will these same sort of pressures force search marketers, in order to realize their business goals, to focus squarely on what's important to their customers.

Comments

1 AJ Kohn February 10, 2015 at 1:15 pm

Excellent points about how we’re moving more toward data rather than documents. But as you say, documents will still be important in many ways.

What I take from the change is that the data allows Google to present multiple forms of intent for a query at one time. That just can’t be easily done with ten blue links.

And Google knows that the data will still likely be contained in documents for the foreseeable future, so not doing ‘right’ by them runs contrary to their goals. As such, I think a savvy marketer is thinking about how their data CAN show up in that aggregation because that’s where more people will begin to click and finally land on your site – not based on a document per se but because the data it contains has been identified by Google and the user as useful.

I’m not sure I’m making myself clear. Long story short, those who lament this change are the ones who likely don’t provide enough value to move ahead. Those who do should think about how they can use this new dynamic to drive more authority and more traffic in the long run.


2 Aaron Bradley February 10, 2015 at 1:42 pm

Thanks a lot AJ! You made yourself clear to me, but I appreciate your additional comments nonetheless.

Especially as, indeed, “those who lament this change are the ones who likely don’t provide enough value to move ahead.”

Because if “content” ceases to be of value when it ceases to drive traffic to a website, just how valuable was it to begin with? 🙂


3 Patrick Coombe February 17, 2015 at 12:23 pm

This has been a huge concern of mine over the last year or two, like many other SEOs. It started (for me) when Google started answering "what is my IP" and has continued ever since. I think a lot of businesses that rely on information that Google is able to answer better watch their back (lyrics, etc).

You are right, it is very exciting. I think the SEOs that stay on top of this stuff are going to be the ones that continue to see success beyond 2015.

Over the last year, I've really become a student / fan of structured data and semantic search. Am I convinced it is a ranking factor? No, however I do think Google scores websites based on usability / UX, so if a website is marked up and easy for G to crawl, added points for your site.

I just discovered your world (G+ community, Twitter, etc) and am glad I found another like minded individual I can learn from.


4 Frank Gainsford March 23, 2015 at 6:57 am

Aaron Bradley many thanx for a truly epic article discussing the future issues associated with search and content. This was a very interesting read and truly appreciated.

Those folks who have been creating content just for the sake of having content available online to ensure better SEO are in for a shock, as they are mostly very repetitive and are not saying anything new, nor do they add any specific user value to the web site where this content is housed.

Then there is the issue of SEMANTIC FOOTPRINTS and how these integrate into the many API tools where users can consume data in different formats, and how these are categorised and listed in search.


5 Tim O'Keefe May 14, 2015 at 10:46 am

Interesting and well articulated analysis of where we are going. I love the data stream vs document contrast that you point out. Scary but exciting all at the same time. What always pops in my mind on the future of search is what happens to the dataless lock 'n' key store? We at my office have pondered that perhaps someday Google would just rather the searcher not even visit the website but see it via the seven pack.


6 Aaron Bradley May 15, 2015 at 9:46 am

Thanks Tim, appreciate your comments.

While Google in many circumstances might not direct a searcher to a website (especially those searching on mobile devices), I think its preference isn’t necessarily that per se, but to provide searchers with an answer to query with the minimum amount of friction. Sometimes this will be accomplished by referring users to a website, but sometimes – and increasingly frequently – not.


7 David Portney May 18, 2015 at 3:38 pm

Hey Aaron!

What a terrific peek-around-the-corner! Makes me wonder if perhaps in the future Google Webmaster Tools will have an “upload your data table” feature, sort of like notifying them about an XML sitemap but maybe via simple spreadsheet format with pre-defined fields for webmasters, er, Internet Information Providers, to complete. So instead of SMBs struggling with creating APIs to feed big G, they just upload a spreadsheet. Or maybe GWT will just add an “add your API” to the interface. Too easy to spam?

Very interesting and forward-thinking stuff here; thanks, Aaron!
PS: somehow missed you at the SearchFest after-party this year, maybe next year!


8 Aaron Bradley May 19, 2015 at 1:30 pm

Thanks for your comments David – much appreciated.

I don’t think an “unload your table” sort of feature is unthinkable at all (and along the lines you suggest there’s work underway to make those tables more semantically meaningful). Easy to spam in the abstract, yes, but I think Google is applying increasingly sophisticated mechanisms that allow it to determine whether or not a source is sufficiently trustworthy enough to allow “pure data” into the index.

Sorry to have missed you at the after-party – odd, I was there!


9 J May 26, 2015 at 12:54 am

Interesting article but it misses out how people actually feel about all of this.

History shows that any person or organisation that tries to control everything eventually falls from grace.

Google is quite literally hated by many for its attempt to own and carve up the internet; many people would like to see Google take a fall – and outside of the US Google has other issues with its image (US domination / surveillance).

The website keeps information where it belongs with interested parties, Google is not interested in bird watching, car mechanics or any other thing you care to mention.

By divorcing itself from the point of origin – real people who are actually interested in something – Google risks alienating itself further.

Also come on!! Who on earth would want digested content rather than joining an online community with a shared interest ( say photography ).

Google is becoming Big Brother and there are lots of people who do not like it one bit!

