A Layman’s Visual Guide To Google’s Knowledge Graph Search API

By Barbara Starr, Search Engine Land (February 12, 2016)

Columnist Barbara Starr delves into the recently released Knowledge Graph Search API and discusses how SEOs might use this data.

Google released its much-delayed Knowledge Graph Search API on December 16, 2015, announcing it in a Google Plus post from the Freebase account. (The release came after Freebase was retired, to support users of the Freebase API.)

The Knowledge Graph Search API allows users to query Google’s Knowledge Graph database to obtain information on the entities contained therein. According to Google, some typical use cases include:

• Getting a ranked list of the most notable entities that match certain criteria.

• Predictively completing entities in a search box.

• Annotating/organizing content using the Knowledge Graph entities.

An illustrated guide is provided below to enable anyone to access this API and leverage its query capabilities without the need for any programming skills. The only prerequisite is that you must have a Google account.

The Knowledge Graph Search API: Getting Started

The Knowledge Graph Search API can be found within Google’s APIs Explorer, shown below:

Google’s APIs Explorer, with “Knowledge” highlighted

Selecting “Knowledge Graph Search API” will take you to a screen that looks like this:

Google APIs Explorer

Click the entry above, which will take you here. You may find it helpful to bookmark this link, as it will typically be your starting point.

The Knowledge Graph Search API: Input GUI

You now have a screen with a graphical user interface (GUI) that contains all the parameters contained in the Knowledge Graph API (the details of which you can find here in their developer guide on the topic).

Explorer GUI for the Google Knowledge Graph Search API

The query parameters below map directly to the input fields specified in the API (see the original documentation for more info):

Query Parameters: Knowledge Graph Search API

So let’s look at a few things we can do with this API and see what sort of results we get.

As stated in the developer guide, the API does not return “graphs of interconnected entities,” merely the entities being searched for (with almost no associated attributes or predicates).

The “ids” field refers to the machine-generated identifier (MID), which is key to understanding concepts in both Freebase and the Google Knowledge Graph. A MID is, per the Freebase wiki, a unique identifier for an entity, and Google still appears to leverage and depend on MIDs in some respects.

This means that it is important to be able to understand and recognize them. Luckily, it is easy to determine what a Freebase MID looks like: they typically start with “/m/” followed by a short string (e.g., the MID for “Barack Obama” is /m/02mjmr).

An easy way to obtain the MID for any entity is simply to enter the entity into the “query” field of the Knowledge Graph Search API.

Querying “barack obama” in the APIs Explorer, and the corresponding results
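
For readers who prefer to skip the GUI, the same lookup can be expressed as a plain HTTP request. Below is a sketch of the request and an abridged response; the API key placeholder is hypothetical and the field values are illustrative, but the endpoint and parameter names are those documented in the developer guide:

```
GET https://kgsearch.googleapis.com/v1/entities:search?query=Barack+Obama&types=Person&limit=1&key=YOUR_API_KEY

{
  "@context": {
    "@vocab": "http://schema.org/",
    "goog": "http://schema.googleapis.com/",
    "resultScore": "goog:resultScore",
    "kg": "http://g.co/kg"
  },
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "EntitySearchResult",
      "result": {
        "@id": "kg:/m/02mjmr",
        "name": "Barack Obama",
        "@type": ["Thing", "Person"]
      },
      "resultScore": 1407
    }
  ]
}
```

The MID appears right in the “@id” field of each result.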

You can also find MIDs in Google Trends. When you select an entity from the autosuggest list that appears as you begin to type in a query, the resulting URL will contain the MID:

Google Trends autosuggest for “barack obama,” with the MID visible in the resulting URL

Of interest is that Google still uses MIDs as identifiers here, rather than the QIDs that Wikidata uses, although there are published mappings from one to the other. In some regards, you can think of a MID as akin to a “primary database key” for the Knowledge Graph.

(Editor’s note: Contributor Clay Cazier recently wrote a great introduction to Wikidata.)

The Knowledge Graph Search API: Output Fields

Response-Fields-Knowledge-Graph-Search-api

Response Fields: Knowledge Graph Search API

Looking at the above, you can see definitions for precisely what is returned in the “Response” area when you execute a request using the Knowledge Graph Search API.

As stated in the specs, a response message contains a list of entities, presented in JSON-LD format and compatible with schema.org schemas (with limited external extensions). Samples of the output are illustrated and interpreted in the Examples section below.

Contrary to my initial intuition, querying the API and restricting output to the type “Person” does not provide the full Person entity with all the associated attributes of that person, but rather just those items listed in the figure above. (It always helps to read documentation prior to playing with data; however, sometimes it is more fun to just start playing. Google does state that if you want a richer, graph-based response, you can move over to Wikidata and obtain the information from that source instead.)

The API does return the MID used by Google, which is useful when applying semantic markup and adding identifiers to fully disambiguate your content, as described here. With Freebase retired, this may now be the best way to obtain those identifiers. Adding identifiers is generally best done in the last stages of applying semantic markup, and it is very helpful to the search engines in terms of disambiguating your content.

Another field worthy of note is the resultScore, which is defined as “[a]n indicator of how well the entity matched the request constraints.” (It may be a type of entity metric, as I described in a previous article.)

Examples

Playing with various inputs and looking at the results can provide some interesting insights. I am going to walk through two queries here and see what we can glean from the results.

In the first query, I entered the Query term “President of the United States,” and for the type, I put “Person.” (Types are derived from schema.org, in this case, schema.org/Person.)

Query: “President of the United States” / Type: “Person”

I then clicked the “Execute” button. This query yields a long output (JSON-LD tends to be rather verbose). For the purposes of this article, I show only the first page of results in the figure below, which also provides an example of what to expect in terms of the response format.

Results from Query 1: “President of the United States” — Barack Obama comes in first with a resultScore of 1407.

A list of EntitySearchResults is returned, the first of which is Barack Obama, whose MID is /m/02mjmr and whose resultScore is 1407. The second result is Bill Clinton, with a resultScore of 1341; then Franklin Roosevelt, with a resultScore of 870; and then George Bush, with a resultScore of 785. The description strings do indicate which number president each person is, but the results appear to be ordered by resultScore, in descending order.
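
Condensed, that ordering looks like this (a hand-trimmed sketch of the itemListElement array; the @id values for the results other than Obama’s are omitted here rather than guessed at):

```
"itemListElement": [
  { "result": { "@id": "kg:/m/02mjmr", "name": "Barack Obama" }, "resultScore": 1407 },
  { "result": { "name": "Bill Clinton" },                        "resultScore": 1341 },
  { "result": { "name": "Franklin D. Roosevelt" },               "resultScore": 870 },
  { "result": { "name": "George Bush" },                         "resultScore": 785 }
]
```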

The next query I elected to run — “Presidents of the United States” — was similar but used the plural form of “president” (implying that I should receive a list of results rather than a specific answer).

Running that seemingly similar query in place of the previous term yielded similar results, yet the resultScore for the number 1 entity in the Response was now 251.

The results are illustrated in the diagram below. (In the image, you can also see the query itself in the “GET” line, followed by the response with the 200 “OK” status code.)

Results from Query 2: “Presidents of the United States,” type-restricted to “schema.org/Person”

You can see how the resultScore for the entity Barack Obama now drops to 251. This is presumably because the intent behind those two queries is very different (although the query “strings” may look very similar).

Visualization Of The Result Data From Query Number 1 In A Triple Store

For the next helpful visualization, I decided to take the output and load it into Gruff, which is the graph-based visualization tool from Franz Inc. (You can download a copy for free.)

I also used Gregg Kellogg’s RDF distiller to convert the results from JSON-LD to triple format before loading. I then selected the first result and created the visualization below.
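
To give a flavor of what the distiller produces, here is roughly what the first result flattens into as triples. This is a sketch: the blank-node label is arbitrary, and the exact predicate URIs depend on the @context in the API response.

```
_:result0 <http://schema.googleapis.com/result> <http://g.co/kg/m/02mjmr> .
_:result0 <http://schema.googleapis.com/resultScore> "1407"^^<http://www.w3.org/2001/XMLSchema#integer> .
<http://g.co/kg/m/02mjmr> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .
<http://g.co/kg/m/02mjmr> <http://schema.org/name> "Barack Obama" .
```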

The visualization definitely helps in understanding how “things” are linked together.

Visualization of the first Entity Search Result in Gruff

Conclusion And Takeaways

Google’s announcement last week that they now support JSON-LD for rich snippets for products and reviews, coupled with the fact that Knowledge Graph search results are only returned in JSON-LD format, provides a strong inkling that providing semantic markup in JSON-LD format is a worthwhile approach for SEOs to adopt.

Using this format for structured data markup means there is no longer a need to wrap markup around the appropriate HTML elements — JSON-LD markup can be placed in the header or footer of the page, which radically simplifies things.
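
As a concrete illustration, a product page could carry something like the following anywhere in its <head>. The product name and rating figures here are invented; the exact required properties are spelled out in Google’s rich snippets documentation:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example Vitamin C Eye Cream",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "89"
  }
}
</script>
```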

Adding identifiers as the last step of the process, in terms of adding semantic markup to a page, and hence aligning your page entities with those in Google’s Knowledge Graph, is a “must.” Obtaining those identifiers is now a simple matter of looking them up in Google’s Knowledge Graph Search API, as I have described above.

In addition, there is a lot of relevant, must-read coverage of the Knowledge Graph Search API that I did not address, which you can find here.

Also ensure you adhere to all of the structured data policies specified by Google. There are many details that have changed, and the policies continue to evolve.

How Trust & Unique Identification Impact Semantic Search

By Barbara Starr, Search Engine Land (November 25, 2015)

Is your content trustworthy, and does that matter? Columnist Barbara Starr explores how Google might be using trust as a signal when displaying search results.

There are many factors that are key to the notion of semantic search. Two that are critical to understand, yet have not been written about much from an SEO point of view, are trust and unique identification.

These two factors lie at the core of the many changes we see happening in search today, alongside Google’s ever-growing knowledge graph and their move in the direction of semantic search.

The Semantic Web Stack

The notion of trust is a key component of the semantic web. The semantic web stack is typically illustrated with trust sitting at the very top layer.

Trust is achieved through ascertaining the reliability of data sources and using formal logic when deriving new information. Computers leverage or mimic this factor in human behavior in order to derive algorithms that provide relevant search results to users.

Search Result Ranking Based On Trust

“Search Result Ranking Based on Trust” is, in fact, the name of a Google patent filed in September 2012. The patent describes how trust factors can be incorporated into a “trust rank,” which can subsequently be used to alter search result rankings in some fashion.

People tend to trust information from entities they trust, so displaying search results to users from entities they trust makes a lot of sense (and also brings in a personalization component).

A group at Google recently wrote a paper titled “Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources.” The paper discusses the use of a trustworthiness score — Knowledge-Based Trust (KBT) — which is computed based on factors they describe therein.

Below, I have extracted some salient features from the paper that I believe are worth understanding from an SEO POV:

We propose using Knowledge-Based Trust (KBT) to estimate source trustworthiness as follows. We extract a plurality of facts from many pages using information extraction techniques. We then jointly estimate the correctness of these facts and the accuracy of the sources using inference in a probabilistic model. Inference is an iterative process, since we believe a source is accurate if its facts are correct, and we believe the facts are correct if they are extracted from an accurate source.

The fact extraction process we use is based on the Knowledge Vault (KV) project [10]. KV uses 16 different information extraction systems to extract (subject, predicate, object) knowledge triples from webpages. An example of such a triple is (Barack Obama, nationality, USA). A subject represents a real-world entity, identified by an ID such as mids in Freebase [2]; a predicate is predefined in Freebase, describing a particular attribute of an entity; an object can be an entity, a string, a numerical value, or a date.

I also most definitely enjoyed the introduction here:

Quality assessment for web sources is of tremendous importance in web search. It has been traditionally evaluated using exogenous signals such as hyperlinks and browsing history. However, such signals mostly capture how popular a webpage is. For example, the gossip websites listed in [16] mostly have high PageRank scores [4], but would not generally be considered reliable. Conversely, some less popular websites nevertheless have very accurate information.

What can be garnered from this is that SEO practitioners should ensure that all statements written on any website or blog are factual, as this will enhance trustworthiness (which may one day impact rankings).

When it comes to searching for entities in Google, it is clearly evident that they use some form of a trust-based mechanism. Users tend to trust online reviews, so reviews and review volumes are useful to users when they search for a specific product. As a case in point, a search for the product “La Roche Posay Vitamin C eyes” yields the following result in organic SERPs:

Google Search for "La Roche Posay Vitamin C eyes" - Organic Results

Google Search for “La Roche Posay Vitamin C eyes” — Organic Results

The only example that shows up without the enhanced displays associated with reviews (rich snippets) is a page that, when selected, does in fact contain reviews from an authoritative and trusted source (Amazon).

The “most trusted” result is given first, as it comes from the official website of the brand and the reviews page associated with that product. This pattern seems to be prevalent in the large majority of product searches at this point in time.

I have written in the past about how another Google patent may utilize reviews in search results in such a manner, and I will quote a relevant portion of the referenced patent here:

The search system may use the awards and reviews to determine a ranking of the list, and may present the search results using that ranking.

Unique Identifiers In E-Commerce

In addition, I have also described in the past how unique identifiers may be leveraged to aggregate reviews by search engines.

Why is this important in the context of reviews in e-commerce?

If a search engine or e-commerce site cannot uniquely identify a product, multiple pages can be created for the same product. This effectively dilutes each page’s “trust rank” and/or link equity, weakening the signals those pages send to the search engines.

For example, you can see below, in the case of the marketplace eBay, that there are many cases where the same product is listed many times, and hence the reviews are not aggregated on one unique URL:

Search for “La Roche Posay Active C eyes ebay”

This means that it is critical for merchants to be able to uniquely disambiguate their products internally, if they want to send strong signals in order to rank in organic SERPs for a specific URL.

Ensuring your product listings are correctly and uniquely identified provides this benefit, as it will aggregate the reviews for that product onto the same page/product listing, thereby strengthening the “trust rank” of that page. It ought to have a corresponding effect in terms of avoiding link dilution for that page.

Until recently, this was also an issue on Amazon, but that appears to have changed. Compare a recent product search on Amazon with the same search from a few weeks ago:

In this product search from several weeks ago, you can see many separate listings of the same product.

In a more recent search for the same product, only one listing appears. From that page, you can select other sellers to purchase from.

Amazon altered this very recently (a couple of weeks ago) and now displays only the one (correct) product at the top of its search results; however, this also appears to give a strong and exclusive bias to the brand owner of the product.

This is unfortunate as I now only seem to get one price (from the brand itself), and it is clearly not the best price. For me, it degrades the user experience, as I don’t seem to be able to get the best price or many options from other sellers (which is my understanding of the definition of a marketplace).

As local businesses are all entities with associated products or services, trust clearly has an equivalent impact and plays a strong role here as well. An example is depicted below for a search for a specific product.

Search for "4 slice toaster oven near me"

Search for “4 slice toaster oven near me”

It is also well known that results from trusted review sites often dominate organic SERPs in local search today, with Yelp as a prominent example. This applies to professional services and all other kinds of local businesses, forming the basis for a large part of the user’s “trust” in that business entity and/or the products or services it offers.

Critic Reviews And Trust

Looking at this in another vein, Google recently started advising publishers to promote their content using critic reviews, suggesting that review markup be added to any of the following pages:

  • The website’s home page
  • A page linked from the home page that provides a list of available reviews
  • A page that displays an individual review

They provide an example for promoting critic reviews for the movie “Gravity” and specify that the preferred format is JSON-LD (although they do accept microdata and RDFa as alternative syntaxes). For examples of the microdata format, they recommend looking at schema.org/review.

Critic reviews: sample JSON-LD markup for the movie “Gravity”
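
In lieu of the original screenshot, here is a hedged reconstruction of what such markup might look like. The review text, rating and publication name are invented; consult Google’s post for the exact attribute set they require:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "Movie",
    "name": "Gravity",
    "sameAs": "https://en.wikipedia.org/wiki/Gravity_(film)"
  },
  "author": {
    "@type": "Organization",
    "name": "Example Film Review"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "4.5",
    "bestRating": "5"
  },
  "reviewBody": "A visually breathtaking survival story set in orbit."
}
</script>
```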

Google in fact put out a terrific video on the topic of Critic Reviews. A snapshot below illustrates how the schema.org markup added for these reviews appears on your mobile device.

As Google clearly states, these snippets help users make decisions and also introduce them to new, authoritative sources (whom they now presumably trust).

Critic review snippets on mobile

The standard set of attributes for critic reviews is well defined in the post, and there are additional requirements for four specific critic review types: movies, books, music albums and cars.

Promote Your Content With Structured Data

As an SEO, you should work to make your code “machine-friendly” and add relevant structured data to your pages using schema.org where applicable. As Google states very clearly here, doing so will give you a better chance of achieving enhanced displays (rich snippets) in search results, as well as a presence in the knowledge graph.

If you can, go one step further than specified in the blog by adding identifiers where possible. Focus primarily on Freebase and Wikidata IDs. I illustrated how to find a Freebase ID in a previous article by locating the “MID” (machine identifier), and I also discussed how to facilitate the search engines’ disambiguation of your content using the “sameAs” predicate in schema.org.

I would also recommend obtaining the Wikidata identifier (or “QID”), which you can find quite easily via Wikipedia by going to the entity’s page and clicking “Wikidata item” in the left-hand navigation.
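
Pulling the two identifier types together, a minimal sketch of such markup for a page about Barack Obama might look like this (Q76 is the Wikidata item for Barack Obama, and /m/02mjmr is the MID discussed above):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Barack Obama",
  "sameAs": [
    "http://www.freebase.com/m/02mjmr",
    "https://www.wikidata.org/wiki/Q76"
  ]
}
</script>
```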


I would like to end this article with a screenshot from the video that I could not resist including, as it makes a very clear statement: structured data allows Google to answer some really hard questions, and Google clearly loves the ability to do so. If you want to make Google happy, be sure to add structured data to your web pages.

Structured data lets Google answer some really hard questions.

Takeaways

  • Mark up all your pages with relevant schema.org markup; if reviews apply, make doubly sure to mark them up, since they add a trust indicator.
  • Add identifiers wherever possible (MIDs and QIDs).
  • If you are running an e-commerce-type marketplace and are interested in “landing pages,” make sure you uniquely identify your products to ensure that your review volumes are maximized and that you do not lose link equity for those pages.
  • If you are a brand site, make sure to add reviews to your product page, along with your unique identifier, to ensure your appropriate recognition as the “official website,” typically in position 1 in organic SERPs (Other factors may alter this, of course).
  • If you are promoting some form of media that supports critic reviews (movies, books, music albums or a product like cars), be sure to add markup to those pages.

Structured Data & The SERPs: What Google’s Patents Tell Us About Ranking In Universal Search

By Barbara Starr, Search Engine Land (May 29, 2015)

Columnist Barbara Starr delves into several Google patents to explore the ways in which the search giant is displaying search results based on structured data and context.

The use of structured data is now increasingly apparent in many aspects of search — but perhaps nowhere is it more evident than in today’s search engine results pages.

Search engine results pages have evolved considerably over the years. We’ve seen a shift from the classic “10 blue links” to an information-rich display that blends many different types of results. In addition to the standard organic search results we all know and love, we’re also seeing knowledge panels, image results, local packs, Google news, and more — each of which has its own unique algorithm for determining placement within these areas.

Google’s shift towards these “blended” search results that include Knowledge Graph-based information has had a marked effect on the search engine optimization (SEO) community. Not only do we need to start incorporating structured data into our SEO strategies, but we need to have an understanding of what factors determine which content gets displayed in different areas of the search engine results pages.

Today, I’m going to delve into some Google patents to help give you a better understanding of how the search giant is thinking about the display of search results based on structured data and context.

Ranking & Ordering Via Entity Metrics

A recent Google patent, “Ranking search results based on entity metrics,” discusses the ways in which certain metrics might be used by a search system (e.g., Google Search) to rank and order results.

The patent starts out by describing how a search engine algorithm works: It looks at a variety of metrics (what we typically refer to as “ranking factors”), then computes a relevance score based on a weighted sum of these metrics to determine placement within search results.

The patent also notes that “ranking search results may be distinct from ordering search results for presentation.” In other words, ranking is an internal measurement based on relevancy, whereas ordering refers to how search results are presented on a page.

So, what does this have to do with structured and entity search?

Well, the patent then goes on to describe how, in some instances, search results are based on information found within “data structures.”

In some implementations, search results are retrieved from a data structure. In some implementations, the data structure also contains data regarding relationships between topics, links, contextual information, and other information related to the search results that the system may use to determine the ranking metrics. For example, the data structure may contain an unordered list of movies, along with the awards and reviews for each respective movie. The search system may use the awards and reviews to determine a ranking of the list, and may present the search results using that ranking.

In other words, information from various external data sources (such as Wikidata, a repository of structured data that helps to power Google’s Knowledge Graph) as well as structured data within your website could be used to determine search engine results page placement.

Entity-specific metrics might be used to enhance and refine this ranking/ordering process. In particular, the patent discusses four entity metrics: a relatedness metric, a notable entity type metric, a contribution metric, and a prize metric. (Note: The patent also indicates that these four metrics are illustrative examples, meaning that others may also potentially be used.)

The four illustrative entity metrics are described as follows:

1. Relatedness Metric 

The relatedness metric looks at the co-occurrence of an entity and its “entity type” on web pages. An “entity type” is generally a categorization or defining characteristic of an entity — for example, George Washington is an entity, of the entity type “US Presidents.”

[W]here the search query contains the entity reference ‘Empire State Building,’ which is determined to be of the entity type ‘Skyscraper,’ the co-occurrence of the text ‘Empire State Building’ and ‘Skyscraper’ in webpages may determine the relatedness metric.

In other words, when you type in a search query, Google may determine that a web page is more or less related to that query based on what other, related words are included on the page.

2. Notable Entity Type Metric

The notable entity type metric refers to the fact that an entity may be categorized under many different entity types, some of which are more “notable” than others — for example, Barack Obama could be categorized as an Author, Politician, Public Speaker and Celebrity, but he is most notable for being a U.S. President.


The notable entity type metric also takes into account that multiple entities can be of the same entity type, so one in particular may be the most relevant to a searcher. For example, both George Washington and Barack Obama are of the entity type U.S. Presidents — but a Google search for “us president” yields a direct answer containing Barack Obama.

In some implementations, the value of the notable entity type metric is a global popularity metric divided by a notable entity type rank. The notable entity type rank indicates the position of an entity type in a notable entity type list.
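
To make that concrete with invented numbers: if an entity had a global popularity metric of 0.9 and “US President” sat at position 1 in its notable entity type list, that type would score 0.9 / 1 = 0.9, while “Author” at position 4 would score only 0.9 / 4 = 0.225.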

3. Contribution Metric (And Fame Metric)

In some cases, the contribution metric is based on critical reviews, fame rankings, and other information. In some implementations, rankings are weighted such that the highest values contribute most heavily to the metric.

It is no surprise that Google may have discovered the power and potential of something like a contribution metric and then applied it to other domains, leveraging context. The patent provides a couple of other interesting tidbits regarding reviews, stated as follows:

  • “[I]nformation for determining a contribution metric may include social media, news sources, research publications, books, magazines, professional and user reviews on commerce websites, e.g. Amazon product reviews, professional and user reviews on dedicated reviewing sites, e.g. restaurant reviews on Yelp, user reviews on industry or domain specific sites, e.g. movie reviews on IMDB, any other suitable source of information, or any combination thereof.”
  • “[T]he search system may combine professional critic reviews and user reviews of restaurants, giving more weight to the professional reviews and less weight to the user reviews.”

The Fame Metric

A sub-metric of the contribution metric, the fame metric takes into account all the contributions of a particular entity. “For example, the fame metric of a movie actor may include a summation of the contribution metrics of that actor’s movies.”

Check out the search engine results page below for actor Tom Hanks. You can see that the “contributions” involved in calculating this fame metric (in this case, his movies) are displayed prominently in the Knowledge Graph panel in its own dedicated area, as mapped to the knowledge panel template in Google’s patent, “Providing Knowledge Panels With Search Results.”


A screenshot of the Google search results page for “tom hanks.”


FIG. 5B is a screen shot of an example search interface in which a knowledge panel is presented with search results. From Google’s patent, “Providing Knowledge Panels With Search Results”

4. Prize Metric

The prize metric is based on an entity’s awards and prizes. For example, a movie may have been awarded a variety of awards such as Oscars and Golden Globes, each with a particular value. In some implementations, the prize metric is weighted such that the highest values contribute most heavily to the metric.

The patent provides strong evidence that semantic web technology is being used as background context for the definitions of the metrics and the environment in which they are framed.

Different Algorithms For Different Screen Areas

There are many interesting elements to the patent, and the last one I want to address is Figure 3, below.

Figure 3 – Ranking Entity Metrics in Search Results patent

At first glance, it looks very innocuous, like an image of standard search results with a bunch of links; you find those sorts of diagrams in many search patents. However, it is accompanied by a very intriguing explanation of the figure, part of which reads as follows:

It will be understood that the presentation of search results in user interface 300 is merely an example and that any suitable presentation of any suitable results may be used. In another example, results may be image thumbnail links, ordered horizontally based on score. In another example, search results may include elements of a map and the search system uses the score to determine which elements to present on the map.

What is interesting here is that it seems that specific regions of the search results are defined or templated in some manner, and ranking/ordering for each varies by context or domain. (Have you noted those fine lines on the screen demarcating or separating results in your search results?) From an SEO point of view, this means that optimizing a company’s website or web presence will be based on targeting these templates, each of which may well have its own ranking algorithm based on context.

As further food for thought, I would like to close with the diagram below, which shows an image from a patent on context, “Maintaining Search Context,” compared to a Google search engine results page for “golden retriever.”

Figure 10 - "Maintaining Search Results Context Patent" - and Google search results mapping

Figure 10 from Google’s “Maintaining Search Context” patent, compared to Google search results for “golden retriever.”

FIG. 10 shows user interface 1000, [which] includes exemplary content displayed in response to receiving a search query “Dog Breeds.” In some implementations, the search system displays related entity area links in the related entity area 1002, […] including “Bernese Mountain Dog,” “Poodle,” “Golden Retriever,” “German Shepherd,” and “Greyhound.” The search system displays search results related to the query “Dog Breeds” in a search result area 1026. The search system displays information related to the entity “Dogs” in an information area, for example information area 1030. Information area may include links to other types of entities such as information area links 1032 to entities of the type “dog breed” and information area links 1034 to entities of the type “Service Animals.”

As you can see, different areas of the screen correspond to different result sets for the same query, presumably each with their own distinct algorithm for ranking and ordering information.

Takeaways/Summary

With the increasing shift from keyword search to entity search — and with the increased growth and usage of Knowledge Graph Panels and other data-based displays — comes the corresponding shift in the direction of SEO.

Ordering and ranking of information driven by the need for a positive, personalized user experience means that different algorithms apply at different times. These algorithms are based not just on traditional ranking factors that assess relevance and authority, but also on how data may be optimally displayed for various device types and screen sizes.

Understanding Question/Query Answering In Search & How It Relates To Your Website

By Barbara Starr, Search Engine Land (November 7, 2014)

As Google delves further into semantic search, our focus must shift from targeting keywords to answering questions. Columnist Barbara Starr shows how you can use structured data to help search engines understand your content.

Search engines are evolving. Search is not only becoming faster, it’s becoming more predictive and conversational — more like a personal assistant.

In the old days, search engine results pages (SERPs) presented little more than a collection of 10 blue links — the results of a search over web documents. These listings typically consisted of the URL along with a “snippet” of text and perhaps some other information. Search engines became quite adept at determining and displaying relevant and readable snippets.

As search engines began to incorporate search over data into their algorithms, results pages evolved accordingly. SERPs now feature engaging displays that include knowledge panels, answers to questions, rich snippets and more. In some cases, these enhanced listings are even actionable — such as a phone number on Google, which you can click to call (via mobile or Google Hangouts).

With the shift from search-over-documents to search-over-data comes a new set of challenges for search marketers. How can we create interesting and relevant content that will both surface in SERPs and also elicit a desired action?

The solution lies in determining what kinds of questions your target audience is asking and how to supply the content for each type of question.

What Kind Of Questions Can Your Content Answer?

It is possible to address many question types (as I have mentioned in previous posts that discuss form-based, or parameterized, queries).

However, to illustrate the overarching concept of optimizing content to answer questions, this post will only delve into a few specific examples. (If there are any specific types of queries or questions you are interested in knowing about, feel free to request it as a topic for Search Engine Land in the comments below or contact me.)

Here, I will address queries that deal with contact information as well as product search queries as they relate to Google Shopping.

Query Types That Ask For A Phone Number

Google provides very clear instructions for using structured data markup to get your company’s phone number(s) included in search results.

In the examples below, you can see that Organization and ContactPoint are the two schemas defined as a minimum requirement for this to occur.

Specifying corporate contacts with structured data markup

Extended markup can be used to specify different types of business numbers, such as “technical support,” “customer service,” “bill payment” and so on. There is also a means to address accessibility, such as specifying a separate phone number (contact point) for hearing-impaired users.
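
A minimal sketch of that markup, with a hypothetical company and phone numbers, might look like this (the contactType and contactOption values are drawn from Google’s documented vocabulary):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "url": "http://www.example.com",
  "contactPoint": [
    {
      "@type": "ContactPoint",
      "telephone": "+1-800-555-0100",
      "contactType": "customer service"
    },
    {
      "@type": "ContactPoint",
      "telephone": "+1-800-555-0199",
      "contactType": "technical support",
      "contactOption": ["TollFree", "HearingImpairedSupported"]
    }
  ]
}
</script>
```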

Once implemented, you can test your markup with Google’s corporate contacts markup tester tool.

Presenting Contact Information Using Tables

Of course, there are other ways to provide Google with structured data about your contact information. For example, Google recently announced the introduction of “Structured Snippets,” in which relevant information is extracted from tables on a web page and displayed in search result snippets.

If you have multiple contact points, it may behoove you to place all of that information in a well-labeled web table. Below is an example of some search results that clearly leverage this implementation.

Structured snippets derived from tables
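
Something as simple as the following (hypothetical numbers) gives the extraction algorithms clean column labels to work with. Note that structured snippets are algorithmically selected, so a well-formed table improves the odds but guarantees nothing:

```html
<table>
  <tr><th>Department</th><th>Phone number</th><th>Hours</th></tr>
  <tr><td>Customer service</td><td>1-800-555-0100</td><td>24/7</td></tr>
  <tr><td>Technical support</td><td>1-800-555-0199</td><td>Mon-Fri, 9-5</td></tr>
</table>
```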

Product Search Queries/Questions

For those in the retail space, it is worth examining the kinds of queries that Google may be leveraging for Google Shopping. This now falls largely under SEM rather than SEO, since Google now requires merchants to pay to be included in results, but it is important to understand how Google is determining relevance.

Although many form-based queries shoppers use are pretty easy to distinguish as product searches — examples would be “black dresses under $X.00” and so on — we can take this a step further by looking at Google’s U.S. patent application 20140143254, “Category and Attribute Specifications for Product Search Queries.” It describes the queries a user may enter and the way a search engine might determine the direct correlation between those queries (questions) and the corresponding product category.

In its application, Google uses the example of the query [50 mm ef lens]. Though the query doesn’t specify the product type, a method could be used to infer that the user wants camera lenses — and, in addition, that “the desired focal length is 50 mm and the desired mount type is Canon EF.”

By contrast, a query like [acuvue lens] also uses the ambiguous term “lens.” But, by pairing that with the other term — the brand name Acuvue — the search engine could determine that contact lenses were the appropriate category in that case.

Diagram from the patent application. Boxes 160, 150, 170 and 180 are used to determine the appropriate category when a query is ambiguous.

Each product category (once mapped) would have its own SERP type with a GUI similar to a landing page. When Google is highly confident it has figured out what type of product a user is looking for, it offers options for narrowing the search in the left sidebar, similar to what you might see on an e-commerce site like Amazon.com. (However, it is provided by Google Shopping and not an e-commerce site on the web.)

This is illustrated in part in the diagram below. The image on the left-hand side is derived from the Google patent; the image on the right-hand side is from the same query run live in Google Shopping as of November 2, 2014. The corresponding mappings between the two are depicted in the diagram.

Mapping to the diagram in the Google Shopping patent entitled “Category and Attribute Specifications for Product Search Queries”

Looking at the details, one can see that the “answer” to the product query is, in fact, the category itself (the most granular and specific that can be determined), and the category page displayed as the answer is the one portrayed above on the right-hand side.

To put it in SEO terms, the product category generates an interim page (landing page) that covers all future questions that could be asked about that product category, with the attribute selectors (faceted search) specified on the left-hand side for further drilldown.

This explains why Google wants the product category specified accurately and to the most granular definition possible (in terms of its product taxonomy hierarchy) in the product feed advertisers/merchants submit.

This would imply the “answer” to any product query is, in fact, stipulated in the product feed itself (assuming the product category field is correctly mapped when submitted by a merchant).

Here are some excerpts from Google’s Merchant Support pages that shed light on how it looks upon the matter:

  • Use The Full “Breadcrumb” Information. Wherever possible, we recommend you include more granular categories as it provides us with more precise classification signals. For example, Books > Non-Fiction > Sports > Baseball is better than Baseball. You must use “>” as a separator, including a space before and after the symbol. If you submit multiple values separated by a comma, only the first value will be used as the ‘product type’ for your Shopping campaigns.
  • Use The Most Specific Category Possible. Being specific will help improve your data quality and, in turn, ensures that offers are presented in the right category. When viewing the taxonomy, try to navigate to the most specific category that still covers all the products being submitted. For instance, if you are submitting MP3 player chargers use the category MP3 Player Chargers. Conversely, if you are submitting multiple types of chargers use the category Chargers.

It is worth noting that Google updates its category taxonomy every three months. Though it still supports categories published in past taxonomies back to August 2011, it stands to reason that abiding more closely to the current taxonomy would be beneficial.

Takeaways/Other Question Types

There are many other types of queries that can be asked of Google that typically take specific formats, both in terms of the query templates and the results presentation templates.

A few means of supplying structured data to Google are as follows:

  • Structured markup on your web pages using schema.org
  • Structured information in the form of tables (ideally with some semantic alignment to schema.org)
  • Information in the Linking Open Data cloud tends to be fertile ground for seeding knowledge graphs. Try to provide strong and consistent signals by placing relevant, high-precision information in locales such as Freebase, Wikidata and other such places.
  • Leverage items such as APIs, data feeds, Atom feeds and other similar formats to ensure timely updates of your information and consistency of the signals you send across all sources that search engines may reference (or cross-reference or corroborate).
  • Leverage all forms and mechanisms that may entail structured information representation, such as inherent structure in the HTML DOM or more specific structure in HTML5.

Bear in mind, all of these mechanisms need to send consistent signals to the search engines in terms of both the type and quality of content you are providing.

With the shift from search-over-documents to search-over-data, when search engines provide an answer to a question or some equivalent function, they need to be able to assert their answer to a question with high confidence by leveraging high precision data internally. Ensure you can provide consistent high-precision data in as many ways as are appropriate.

Demystifying The Google Knowledge Graph

By Barbara Starr, Search Engine Land (September 2, 2014)

Columnist Barbara Starr explains the power of the newly extended Knowledge Graph and how to leverage semantic technology for better search visibility.

Search is changing — and it’s changing faster than ever. Increasingly, we are seeing organic elements in search results being displaced by displays coming from the Knowledge Graph.

Yet the shift from search over documents (e.g. web pages) to search over data (e.g. Knowledge Graph) is still in its infancy.

Remember Google’s mission statement:

Google’s mission is to organize the world’s information to make it universally accessible and useful.

The Knowledge Graph was built to help with that mission. It contains information about entities and their relationships to one another — meaning that Google is increasingly able to recognize a search query as a distinct entity rather than just a string of keywords. As we shift further away from keyword-based search and more towards entity-based search, internal data quality is becoming more imperative.

In other words, if you want to be findable in the search results of the future, search engines need to be able to understand what entities are on your web page.

Search engine technology can extract entity information from your content in two ways — explicitly (using structured data markup), or implicitly (using natural language).

Extracting entities from a web page: explicit vs. implicit

Explicit Vs. Implicit Entities

Explicit entities are obtained when search engines consume structured data markup on a web page, leveraging semantic web technology to do so.

Implicit entities are derived or inferred from the text on a web page. The technology utilized to obtain these entities is typically some form of natural language processing (NLP) or a similar information retrieval technique.

(Check out my last article for a list of tools to assist in determining both the implicit and explicit entities on your web page.)

Takeaway: Make sure both your implicit and explicit entities send the same “signal,” i.e., are about the same topic, thereby strengthening the positive signal about that topic you send to search engines.

Schema.org Extends The Knowledge Graph

For those of you who are concerned about “getting into the Knowledge Graph,” placing structured data markup from Schema.org on your web pages is critical.

Schema.org hosts a collection of structured data markup schemas that help search engines understand the information contained within HTML documents. This structured data allows search engines to identify entities and define the relationships between them — which in turn leads to better, richer and more useful search results.

Schema.org defines (amongst other things) relationships

In other words, becoming an authoritative resource on something and marking up your page with the appropriate structured data markup can increase your chances of “getting into the Knowledge Graph.” What follows is an example of how to get your events into Google’s Knowledge Graph using schema.org and JSON-LD.

How To Get Your Events Into Google’s Knowledge Graph

Note: This step-by-step guide/example is taken directly from the Google I/O tutorial (which can be found here) as well as one or two screenshots from the “Cayley” tutorial from Google I/O 2014 (which can be found here).

As I have already mentioned, understanding entities helps Google understand what “things” are in the world and what users are searching for. The Knowledge Graph leverages information from authoritative sources such as Wikipedia, Freebase, Google Maps, the FDA, and so on.

In the case of events, the best and most authoritative source of event information is the event organizers themselves. A search engine is therefore incentivized to surface an authoritative answer from the event organizer’s official website.

As you can see from the illustration below, the Knowledge Graph results for “Keith Urban” contain a list of upcoming shows. The structured data from the official website is consumed by the search engine, and that information is then stored directly in Google’s Knowledge Graph.


Event organizers are thus requested to take ownership of their events and mark up their websites accordingly. When Google crawls the web, it can then read the markup and show the results to users when they are searching for that information.

As you can see, the Knowledge Graph powers a lot of information and results surrounding events. In this case specifically, we see the Knowledge Graph powering:

  • Knowledge Graph entries in SERPs
  • Event Listings in Google Maps
  • Notifications in Google Now

Schema.org is the vocabulary or ontology of choice for specifying this information, as it is supported by the major search engines, namely Yandex, Yahoo, Microsoft and of course, Google.

In terms of the syntax you utilize to mark up your event listings, it can be done in either microdata or JSON-LD, whichever you prefer. Decide which syntax you want to use and stick with it. (In my personal opinion, JSON-LD is easier.) Once you pick your syntax, make it consistent throughout your web page (and preferably website, depending on what kind of information you are marking up).

Microdata (left) and JSON-LD (right) for schema.org type MusicEvent

For the type of the event, ensure that you use the most specific subtype, such as “SportsEvent” (or whatever event type you are marking up). For specific event subtypes, you can add useful attributes such as home team and away team. In the example below from the Keith Urban website, you can see the event type listed as “MusicEvent.”

JSON-LD on the official artist website

For the “offer” information, add the URL of the ticket seller’s web page. At this point, Google will try to visit the ticket seller and check for the corresponding information. An example of how that webpage should be marked up is shown below.

JSON-LD for the official ticket seller’s website

As you can clearly see, the offer is marked up, along with the price information, dates it is valid, the availability of the ticket (“InStock”) and the website offering the ticket.
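
Putting the pieces together, a minimal sketch of an event listing and its offer, with all names, dates, prices and URLs invented, could look like this:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "MusicEvent",
  "name": "Example Artist Live in Concert",
  "startDate": "2014-10-18T20:00",
  "location": {
    "@type": "Place",
    "name": "Example Arena",
    "address": "123 Main St, San Diego, CA"
  },
  "offers": {
    "@type": "Offer",
    "url": "http://tickets.example.com/event/12345",
    "price": "75.00",
    "priceCurrency": "USD",
    "availability": "http://schema.org/InStock",
    "validFrom": "2014-06-01"
  }
}
</script>
```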

Once you’ve implemented your markup, the next step is to ensure it is valid. For events, you can do that with Google’s Events Markup Tester page. (For other types of structured data, check out Google’s structured data testing tool.)

Google’s Event Markup Testing Tool

So there you have it. In summary, Google will crawl the official website, check the website for the official ticket seller for the event, store that information in the Knowledge Graph, and then be able to display it in search results when users are seeking that data.

For those of you who have events to mark up, enjoy doing so; for those working in domains that do not involve events, understanding the process is still helpful, as it is similar for other types of structured data markup implementation.

Things, Not Strings

As an additional takeaway, I would recommend thinking of entities just as they are: “things, not strings,” as Google puts it. The future of search is moving away from the idea of “keywords,” and notions about “keyword density” have no place in the future of SEO. Note the following recent statement from Google’s research blog:

Now, with the Knowledge Graph, we are beginning to think in terms of entities and relations rather than keywords. “Basketball” is more than a string of characters; it is a reference to something in the real world which we already know quite a bit about. Background information about entities ought to help us decide which of them are most salient. After all, an article’s author assumes her readers have some general understanding of the world, and probably a bit about sports too. Using background knowledge, we might be able to infer that the WNBA is a salient entity in the Becky Hammon article even though it only appears once.
– Research Scientist Dan Gillick and Product Manager Dave Orr

In other words, as semantic technology gets more sophisticated, you may only have to mention the concept once, and the remainder can be deduced by the search engines. Thus, natural language will become more and more the norm as search engines become better at identifying implicit entities. Keyword stuffing will be — and in fact already is — a technique of the past.

Key Takeaways

In summary, there is a lot you can do to optimize a web page for the Knowledge Graph. My primary strategy, as discussed in this piece, is outlined here:

  • Determine what entities you want to target.
  • Determine what topics are of interest to your audience.
  • Send a strong signal your site is writing about that topic to the search engines by using structured markup (explicit entities).
  • Corroborate the information and strengthen that signal with the content you place on your site (implicit entities).
  • Remember: entities are not keywords, so do not treat them as such. A mere mention in the appropriate context can be a powerful signal.

Use These Tools To See What Entities Are On A Web Page

By Barbara Starr, Search Engine Land (June 27, 2014)

Search engines leverage structured data to determine what entities are on your web page. They can also do this using other techniques such as natural language processing (NLP) and machine learning.

This article will introduce you to various tools that will help you identify entities on a web page. These tools include:

  • Chrome plugins/extensions that enable you to see entities derived from structured data/markup as you surf any web page
  • A fantastic visualization tool for entity graphs which can be run locally on your computer
  • Tools that extract entities on a web page using primarily natural language processing (NLP) techniques
How to see your web pages like a search engine does

Useful Chrome Plugins/Extensions

There are several Chrome plugins that are extremely helpful in understanding (and actually seeing) what structured data is on a web page. Illustrated below are the Chrome extensions I use, listed under the keyword used to locate them via Chrome Web Store search.

Chrome extensions found in the Web Store under the keywords "metadata," "microdata" and "og." (Click to enlarge.)

The extensions covered below are the Microdata inspector, the Microdata/JSON-LD sniffer, the Semantic Inspector, the META SEO inspector and Green Turtle RDFa; each can be found by name in the Chrome Web Store.

There are many advantages to utilizing these plugins. For one, they really give you a good feel for who is using what markup on their websites as you surf the web on a daily basis. When you see those little microdata and structured markup icons appear in your browser, you need only click to instantly see what kinds of markup and meta data are on a page.

Another advantage is that some of these extensions process JavaScript, so you can see all kinds of interesting information that you could not see were you to run those same pages through Google's rich snippets testing tool.

Attempting to display the amount of information provided by all these extensions in a single screenshot is not possible, so I’ve opted to include just a few examples, broken down into several screenshots.

(Note: The fact that there is too much info to display in a single screenshot is indicative of the growth of structured data on the web since I last wrote on the topic 2 years ago. The volume of information that is available on the average web page, by comparison to 2012, has increased by orders of magnitude.)

The three screenshots below provide a sampling of the kinds of information gleaned via the microdata extensions. (All can be enlarged by clicking.) While the info is fairly similar across all three extensions, it's nice to have several tools available in case one picks up something the others have missed.

Information provided by the Microdata inspector extension, used on a Walmart product page. (Click to enlarge.)

Information provided by the Microdata/JSON-LD sniffer extension. (Click to enlarge.)

Information provided by the Semantic Inspector extension. (Click to enlarge.)

All three plugins have identified schema.org Product markup, including properties for image, name, brand, manufacturer, model, product ID, offers and description. They’ve also identified the on-page markup for reviews and ratings.

Marketers looking to implement their own structured markup might be most interested in the Microdata/JSON-LD sniffer extension (middle screenshot above), as it provides the information in a convenient HTML view.
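
For readers who have not yet peeked under the hood, the kind of markup these extensions detect looks roughly like the sketch below. (This is a hand-written schema.org Product example with invented values, not the actual Walmart source.)

    <div itemscope itemtype="http://schema.org/Product">
      <span itemprop="name">Example Widget</span>
      <img itemprop="image" src="/images/widget.jpg" alt="Example Widget">
      <span itemprop="brand">Acme</span>
      <span itemprop="model">W-100</span>
      <span itemprop="description">A sturdy, all-purpose widget.</span>
      <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        $<span itemprop="price">19.99</span>
        <meta itemprop="priceCurrency" content="USD">
        <link itemprop="availability" href="http://schema.org/InStock">
      </div>
    </div>

The extensions simply parse itemscope/itemtype/itemprop attributes like these out of the page source and present them in a readable form.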

The META SEO inspector gives an even higher-level view of page data:

Information provided by the META SEO inspector extension. (Click to enlarge.)

As illustrated by the screenshot above, the META SEO inspector lets you see all kinds of metadata provided to search engines, ranging from old-fashioned but still utilized metadata tags to schema.org information, Facebook Open Graph, Twitter tools/cards and more.

The last extension I'm going to cover here is called Green Turtle RDFa. This extension not only provides a complete listing of the subject-predicate-object triples on a web page, but also a visualization of that information. Here's a view of the information Green Turtle has gleaned from the Walmart product page we've been using as an example so far:

Information provided by the Green Turtle extension. (Click to enlarge.)

With the right settings enabled, this tool also extracts microdata. To turn on that feature for this extension once you download it, you need to perform the following actions:

Once you have downloaded the Green Turtle extension into your Chrome browser, go to Tools –> Extensions and find it in your extensions list. Select “Options,” then check the box to Enable Microdata.

Enabling microdata (as well as RDFa) in Green Turtle

Now that you have enabled both RDFa and microdata parsing for the Green Turtle plugin, you should see much more information. Check out the new results for that same Walmart product page:

Green Turtle information with microdata enabled. (Click to enlarge.)

Structured Data Visualization Tool: Gruff

Gruff is a free, downloadable tool (Mac or PC) that allows you to visualize the structured data (or triples: data entities composed of subject-predicate-object) harvested from a web page. The graphic below (extracted from a recent Search Engine Land article I wrote) will give you an idea of the type of information Gruff can give you.

Illustrations of structured information retrieved from Gruff, running it over a Search Engine Land article. (Click to enlarge.)

To use Gruff, you must first download it here. To run it locally and use the simpler installation, I would recommend downloading the 3.3 version (you will see both when you select the download option).

Once Gruff is installed, you will need to create a “New Triple-Store” under the File menu. Once completed, you can then extract web page data by going to File –> Extract Microformat/RDFa Data from Web Page and then entering the URL in the box provided. (Leave the Graph Name field blank.)

Extracting web page data with Gruff. (Click to enlarge.)

When the program has finished extracting the data, go to the Display tab and select the last option, Display Triples of One Graph. This should bring up the data visualization map (as seen above).

Tools That Allow You To Get Information Out Of Text

(See What Entities Are In Your Text – NLP Tools)

TextRazor is an API that analyzes text input to determine information about specific entities within that text. With this tool, you can "extract the Who, What, Why and How" from the text of web pages, tweets, emails, etc. To see how it works, check out their demo page and input some text.

As an example, here’s what TextRazor came up with when analyzing the first two paragraphs of one of my previous columns (click the images to enlarge):

TextRazor analyzes text and extracts entity information. (Click to enlarge.)

When you hover over entities identified by TextRazor, additional information about that entity pops up, including a Wikipedia link and a Freebase ID. (Click to enlarge.)

There are a number of other useful tools and APIs for named entity extraction over text as well.

These tools can be fun to play with, while providing a helpful understanding of how entities and entity graphs can be derived from both structured and unstructured information sources in a web page.

Key Takeaways

  • Understanding that entities can be derived from structured, semi-structured and unstructured information in a page can be enormously helpful in a transition to understanding semantic search.
  • Spend some time simply playing with these tools, and you will find the task and concept are less daunting than they may appear.
  • If it appears complicated, that's because it is complicated! However, for the purposes of merely utilizing the technology to obtain a vision of what entities are on a page (which is all you need to know unless you want to be a researcher or build the tools), it is simply helpful and fun.

The post Use These Tools To See What Entities Are On A Web Page appeared first on Search Engine Land.

Shopper FOMO – What Is It & Are You Dealing With It Appropriately? /shopper-fomo-what-is-it-are-you-dealing-with-it-189029 Wed, 21 May 2014 15:20:33 +0000 http:/?p=189029 FOMO, or "fear of missing out," is something that is playing an increasingly significant role in today's digitally connected world. Wikipedia defines it as follows: Fear of missing out or FOMO is a form of social anxiety — a compulsive concern that one might miss an opportunity for social interaction, a novel experience, profitable investment […]

FOMO, or “fear of missing out,” is something that is playing an increasingly significant role in today’s digitally connected world. Wikipedia defines it as follows:

Fear of missing out or FOMO is a form of social anxiety — a compulsive concern that one might miss an opportunity for social interaction, a novel experience, profitable investment or other satisfying event. This is especially associated with modern technologies such as mobile phones and social networking services.

The web has provided us with more options and possibilities than ever before — but the downside to having a seemingly infinite amount of choices is that we often worry that we’re making the wrong ones, potentially missing out on something better or more exciting.

The Majority Of Consumers Have “Shopper FOMO”

Although FOMO is most often used in a social media context, it relates to search and shopping as well. A recent article published the results of a study conducted by Compare Metrics and the e-tailing group, and the findings are interesting:

Findings from the study […] indicated that 73% of shoppers express fear of missing out (FOMO) when searching for products online. These consumers believe their searches are often misinterpreted and that “absolute” filters cut out product options they would have otherwise considered.

“That was a massive ‘a-ha moment’ for us,” Roberts said in an interview with Retail TouchPoints. “It really was shoppers not trusting their search results and truly being afraid to select a certain filter. They would ask: If I click this box, what is going to go away that might have been a perfect purchase?”

This prevalent “shopper FOMO” is symptomatic of a widespread lack of accurate and/or sufficient data when it comes to product information — and this lack of data could well be impacting online retailers’ bottom line. A recent survey of over 2,000 adults in the UK found that 42% had “given up on an online purchase because they didn’t have enough information about the purchase.”

Along those lines, consider the following statistics from Capgemini’s recent study, Beyond the Label: Providing Digital Information Consumers Can Trust:

Statistics from Capgemini's "Beyond the Label" study. (Click to enlarge.)

Consumers need to trust the information about the product they’re buying. Online retailers need to ensure that they are supplying high-quality data in their product feeds and on their websites if they want to decrease shopper FOMO and increase conversions.

High-Quality Data Improves Faceted Search

Faceted search allows searchers to sift through information by applying filters. It has lately become common in online retail search, including in almost all comparison shopping engines.

An example of faceted search in Google Shopping is illustrated below, where the facets/filters appear on the right hand side of the item described (a red dress), and the properties that may populate those facets are illustrated on the left hand side.

Mapping product properties to facets/filters in Google Shopping. (Click to enlarge.)

In order to ensure that your products have high visibility in comparison shopping engines and similar platforms, you’ll want to ensure that your product feeds are filled with high-quality data. This means that your product data should be accurate, descriptive and as complete as possible. Take the following example of how to improve low-quality feed data, from the Google Shopping blog:

Improving low-quality product feed data: an example from the Google Shopping blog. (Click to enlarge.)

For the Google Product Category, you need to pick the most granular (and accurate) option possible. For color, it is best to match the color you specify as closely as possible to those used in the shopping engine's search filters. In this case, populating the color field with "Lime" does not make sense, as Google Shopping does not have that color option. Even if Google is smart enough to categorize it as "green," the item still might not show up for broader "green" searches.

The above example is Google specific, but the principle is the same across all comparison shopping engines. Supply rich, valid information about each product in your data feed.
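
As a hypothetical illustration of that principle, a corrected feed entry might change something like the following (attribute names follow the Google Shopping feed specification; the values are invented):

    # Before: category too broad, color not filter-friendly
    google_product_category: Apparel & Accessories
    color: Lime

    # After: granular category, color matched to the engine's filters
    google_product_category: Apparel & Accessories > Clothing > Dresses
    color: Green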

GTINs In Product Feeds

In e-commerce, there is a global product identifier that is leveraged across all major comparison shopping engines and search engines: a GTIN, or Global Trade Item Number. In the U.S., this is better known as a UPC (Universal Product Code). It is also the number above or below the barcode.

The GTIN is a unique number that identifies a specific product. Since this unique identifier is recognized worldwide, it’s a great way for shopping platforms to verify the exact item you’re selling, thus increasing consumer trust and preventing spammy or misleading listings.

Google and Amazon already require sellers to supply the GTIN in most cases, and comparison shopping engines that don't already are bound to do so in the future to provide a better user experience for their customers. Ensure that you are supplying this data in all your product feeds, as this will maximize your visibility in shopping engines.
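
In addition to the feed, you can expose the GTIN in your on-page markup. Here is a minimal sketch using schema.org's Product type in JSON-LD; "gtin12" is the schema.org property corresponding to a 12-digit UPC, and the product and number here are invented:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Product",
      "name": "Example Widget",
      "brand": "Acme",
      "gtin12": "012345678905",
      "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD"
      }
    }
    </script>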

Key Takeaway

Having a rich, trusted source of product information so you can accurately populate search filters (with data and information both consumers and search engines can trust) is absolutely essential — and providing this information is a great first step toward mitigating shopper FOMO. Leveraging high-quality data and GTINs in your product feeds leads to greater customer satisfaction, higher conversions and lower ad costs.

The post Shopper FOMO – What Is It & Are You Dealing With It Appropriately? appeared first on Search Engine Land.

How To Use Entity Markup To Appear In The Knowledge Graph /5-ways-optimize-markup-knowledge-graph-semantic-search-186755 Fri, 21 Mar 2014 17:00:59 +0000 http:/?p=186755 Search engines have increasingly been incorporating elements of semantic search to improve some aspect of the search experience — for example, using schema.org markup to create enhanced displays in SERPs (as in Google's rich snippets). Elements of semantic search are now present at almost all stages of the search process, and the Semantic Web has […]

Search engines have increasingly been incorporating elements of semantic search to improve some aspect of the search experience — for example, using schema.org markup to create enhanced displays in SERPs (as in Google’s rich snippets).

Elements of semantic search are now present at almost all stages of the search process, and the Semantic Web has played a key role. Read on for more detail and to learn how to take advantage of this opportunity to make your web pages more visible in this evolution of search.

Under the Hood of Semantic Search

Although there has been some argument within the academic community that the Semantic Web "never happened," it is blatantly clear that Google has adopted its own version of it, as have other search and social engines. I wrote an article back in September 2012 discussing how search and social engines are adopting the Semantic Web and semantic search, including a timeline of that adoption.

It was very apparent, even then, that the search engines were moving in the direction of becoming answer engines, and that they were increasingly leveraging the Semantic Web and semantic search technology.

Schema.org Can Be Used To Extend The Knowledge Graph And In Knowledge Graph Panels

It was also clear at the time that Google was using schema.org to extend the knowledge graph. This was clearly illustrated at Google I/O in May 2013, when the Knowledge Graph was only in its infancy.

There, Google execs discussed their focus on answering and anticipating questions, as well as conversational search. The Hummingbird announcement several months later reinforced this new direction and showed that the Knowledge Graph project has been a roaring success thus far.

Knowledge Graph Extended With Schema.org markup, from Google I/O 2013 (Click to enlarge.)

Google I/O 2013

Prior to the advent of Hummingbird, we already saw semantic search techniques being used increasingly at every stage of the search process. At a higher level, you can define them approximately as:

  1. Constructing The Query: Google has shifted toward a better understanding of natural language and form- or template-based queries, which makes search less reliant on users inputting precise keywords in order to return the results they're looking for. Google is improving its ability to refine, understand and disambiguate user intent.
  2. Retrieving Relevant Information Resulting From The Query: Google is increasingly producing answers in addition to results. It has also become adept at understanding and suggesting relevant entities for a given query.
  3. Presenting The Results To The User: Google has tested various ways of presenting the data, documents and appropriate information summaries to the user in an engaging manner. Blended search results, Knowledge Graph entries, and rich snippets are all examples of this.

To Harness The Knowledge Graph, Teach It What “Things” Are On Your Site

The phase of the search process that occurs prior to the actual query is, of course, the indexing and analysis of content (web documents or datasets such as Freebase).

The goal of indexing is really to speed up answer presentation, and it now goes as far as pre-extracting and disambiguating (or identifying) entities; thus, adding semantic markup to your web pages, where relevant, is a must for on-page optimization.

Knowledge Graph: Understand/Uniquely Identify Every “Thing” In The World

The key to understanding semantic search is identity. Google’s knowledge graph initiative was intended to give an identity to every “thing” — or entity — in the world. This identity includes facts about the entity, as well as its relationships to other entities.

The purpose of creating these identities is so that search engines can better understand user intent for ambiguous search queries. (For example, should a search for the phrase [black eyed peas] return results for a food or a musical group?)

Understanding user intent is key to going from a search engine to an answer engine — rather than matching your query to keywords on a page, search engines want to understand what you are looking for based on context and provide you with the most relevant answer.

Microsoft has given a fairly concise definition of the entity recognition and disambiguation process:

The objective of an Entity Recognition and Disambiguation system is to recognize mentions of entities in a given text, disambiguate them, and map them to the entities in a given entity collection or knowledge base.

In Google’s case, that means recognizing entities on web pages or web documents and mapping them back to specific entities in their Knowledge Graph.

At this point, everyone is familiar with schema.org. Putting schema.org markup on your pages is a huge help in making them machine readable and assisting search engines; however, it is possible to take this even one step further.

In July of 2013, Freebase made an interesting and important announcement via Google+: 

Announcement of "sameAs" in schema.org

This means websites should now be marked up to indicate what “entities” they’re talking about in their content — telling search engines that these entities are the “sameAs” those on other sites or entity databases like Freebase.

Let us take a closer look at this. In the Google I/O 2013 talk given by Dan Brickley and Shawn Simister, they illustrated two examples of using this "sameAs" property.

The first way is by declaring your schema.org entity (whatever it is) to be the same as some other web page, such as a Wikipedia article. Here is an example:

Schema.org "sameAs" mapping back to a URL
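
The slide itself is a screenshot, but the underlying pattern is simple enough to reconstruct. A minimal microdata sketch (with a hypothetical entity, not the slide's exact example) might look like this:

    <div itemscope itemtype="http://schema.org/Person">
      <span itemprop="name">Tim Berners-Lee</span>
      <link itemprop="sameAs" href="http://en.wikipedia.org/wiki/Tim_Berners-Lee">
    </div>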

The second way is by associating your entity with an ID within a knowledge database, such as Freebase:

Schema.org disambiguated (using "sameAs") to a Freebase MID for the item

What Is A Freebase MID & How Do I Find One?

For those of you not familiar, Freebase is “a community-curated database of well-known people, places and things” — in other words, a very large database of entities. Every entity in Freebase is identified by a machine ID (MID), which is basically an assigned ID number. MIDs take the form of [/m/xxxxx], where [xxxxx] represents a string of numbers and lower-case letters of variable length.

Let’s assume you want to look up a MID to help the search engines disambiguate an entity on your page. I will use the example of Danny Sullivan. If I go to Freebase and look him up using the search box at the top of the page, I get the result below:

(Click to enlarge.)

As you can see, there are several Danny Sullivans to choose from. I selected the “organization founder,” as that is the Danny Sullivan intended here. You can see that his unique ID in Freebase (or MID) is [/m/0fyf30].

We could thus use his Freebase MID to label him as a specific entity (and disambiguate him from other Danny Sullivans) as follows:

Markup with "Danny Sullivan" disambiguated and mapped to his Freebase MID.
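
The screenshot above is an image, but markup along these lines would accomplish that mapping. This is a sketch rather than the slide verbatim, using the MID we just found:

    <div itemscope itemtype="http://schema.org/Person">
      <span itemprop="name">Danny Sullivan</span>
      <link itemprop="sameAs" href="http://www.freebase.com/m/0fyf30">
    </div>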

For a great use case on entity mapping, check out “I Am an Entity: Hacking the Knowledge Graph” on the Moz blog.

Enrich Your Entities For Maximum Visibility

Remember to fully specify every property of your entity for maximum visibility in the search engines, and also to qualify for complete rich snippet displays.

This was further illustrated in Google’s announcement last week regarding event data. You can see here that complete information is imperative, and you can use options ranging from microdata to JSON-LD as specified in this Search Engine Land article.

Complete Information is Imperative.
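
As a sketch of what "complete" might mean for an event, consider the following JSON-LD. (The venue, dates and prices are invented; the property names come from schema.org's Event, Place and Offer types.)

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Event",
      "name": "Example Jazz Festival",
      "startDate": "2014-06-14T19:00",
      "endDate": "2014-06-14T23:00",
      "url": "http://www.example.com/jazz-festival",
      "location": {
        "@type": "Place",
        "name": "Example Amphitheater",
        "address": "123 Main St, San Diego, CA"
      },
      "offers": {
        "@type": "Offer",
        "url": "http://www.example.com/jazz-festival/tickets",
        "price": "25.00",
        "priceCurrency": "USD"
      }
    }
    </script>

Every additional accurate field is one more piece of information the Knowledge Graph can trust and display.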

Duplicate Markup Is Perhaps Analogous To Duplicate Content

Another item worthy of note: structured data is becoming so widespread that there is now a need to identify official listings for that data, especially in certain more popular categories of markup.

Google referenced this in its announcements about adding events to the Knowledge Graph. Replicated event listings that are not associated with the official source are becoming an issue for a search engine that wants to display only the most official listing in its "knowledge panel" or "answer box."

Summary/Takeaways

  • A key concept in the Knowledge Graph is that of identity. Part of the Knowledge Graph initiative is to teach the Knowledge Graph about every entity in the world, giving it a unique "identity" which describes all associated properties and relationships to other entities.
  • You can facilitate disambiguation of your entity in schema.org by using the "sameAs" property. This property can tell the Knowledge Graph that a particular instance of a "thing" on a web page is the same as the entity identified by a Wikipedia URL or Freebase MID.
  • We are heading toward a significant proliferation of markup on the web. For example, the same event may be listed on multiple websites; in these cases, the markup from the most official website would be used in the knowledge panel.
  • Information describing an entity in schema.org must be complete and rich. The same applies to rich snippets showing up in the organic section of search results. It is best to populate as many fields/properties as possible with accurate information when using schema.org on a web page.

The post How To Use Entity Markup To Appear In The Knowledge Graph appeared first on Search Engine Land.

From 10 Blue Links To Entity SERPs: Is Your Website Ready? /10-blue-links-entity-serps-website-ready-182628 Thu, 30 Jan 2014 17:00:46 +0000 http:/?p=182628 Search is changing and along with it the landscape of search results. SERPs are more adaptive, more engaging, more informative, more interactive and more personalized. The adoption of Semantic Search- and Semantic Web-related enhanced displays in SERPs initially appeared via Yahoo and its SearchMonkey results. This was in due course followed by Google's Rich Snippets […]

Search is changing and along with it the landscape of search results. SERPs are more adaptive, more engaging, more informative, more interactive and more personalized.

The adoption of Semantic Search- and Semantic Web-related enhanced displays in SERPs initially appeared via Yahoo and its SearchMonkey results. This was in due course followed by Google’s Rich Snippets and further compounded by their Knowledge Graph initiative and its associated altered displays in SERPs.

 

Digital Data Consumed and Processed

Today's searchers are now very familiar with the "knowledge panel" and aggregated results in Google's Knowledge Carousels. At the time of this writing, the impact of Google's most recent announcement is that we may now see Knowledge Graph information moving into the portion of the screen previously reserved for organic results.

As Google extends and enhances its capabilities, there are some basic and essential steps you can take to ensure your website can leverage this transition. In effect, it comes down to supplying the search engines with the proper signals that will enable them to best present your content to your target audience.

Before describing how to leverage Semantic SERPs, it is probably best to define them.

Understanding Semantic SERPs

Semantic search techniques are leveraged by the search engines at many stages of the search process. There has been a lot of discussion about how semantic search is used to better understand user intent, transform a query or extract information from a page, but less about how it is utilized in the other stages. In this article, I am going to focus on the final stage; in other words, how the semantic results are retrieved (as entities, concepts or objects, however you prefer to think of them) and subsequently displayed to the user.

Semantic Search in many respects is considered synonymous with Entity Search. Entity search is simply the process of searching over a graph of entities (and, of course, getting entity results in return).

In programming vernacular, you can think of entities as objects. Different objects have different properties and different actions (or methods or procedures) that can be associated with them, depending on whether the object is, say, a product, place, person, recipe, thing and so on. Given that we now have these objects as results, the presentation of these results to the user is adaptable and, better yet, actionable. (For example, with an object/entity of type product returned as the result of a search query, an associated action/method/procedure may be to purchase the returned product.)

Enhanced displays in SERPs are based on the type of entity/object returned; images, reviews, prices and so on are properties that depend on that type. Not only can the content in these objects/entities be used to allow the returned objects to display themselves attractively (as Rich Snippets) in order to improve the user experience, but they can also contain a call-to-action. The call-to-action for an event might be to purchase a ticket for that event. This is also clearly an ideal mechanism for facilitating adaptively displayed results for different device types and so on.

Along with the shift to a Semantic Search Paradigm, a key transition has taken place. Searches are now frequently conducted as searching over data (i.e., in Google’s case, the internally curated Knowledge Graph) either in place of, or in conjunction with, searching over documents (in this case, the documents referred to are typically web pages).

Searching Over Data

Searching over data will typically generate direct answers to a query — or, if more than one answer is returned, a list of results of the same type. The latter lends itself very well to a carousel type of display. Searching over data typically equates to dealing with the Knowledge Graph (if we are referring to Google specifically).

Searching Over Documents

Searching over documents (web pages) tends to result in the display of those documents. In Google, this could mean Rich Snippets are displayed as those results in SERPs. Rich Snippets are enhanced displays of web page results formed by directly extracting embedded metadata from a web page. (Yahoo initially leveraged them in SearchMonkey, and Bing with their Bing Tiles.)

The key thing to bear in mind (from an SEO point-of-view) is that you must at least have a minimum amount of structured data included in your webpage in order to potentially enable the display of a Rich Snippet.
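
What does that minimum look like in practice? For a review rich snippet, for example, markup on the order of the following is the kind of signal required (a sketch with invented values, using schema.org's AggregateRating):

    <div itemscope itemtype="http://schema.org/Product">
      <span itemprop="name">Example Widget</span>
      <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
        <span itemprop="ratingValue">4.4</span> stars, based on
        <span itemprop="reviewCount">89</span> reviews
      </div>
    </div>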

The Changing Landscape Of Semantic SERPs

As search has shifted more and more in the direction of semantic search, SERPs have moved toward increasingly rich displays. In the past, I have referred to these as "Entity SERPs." However, as they are directly related to semantic search, and the increasing use of rich and engaging displays has been coupled with its adoption, a better term to describe them may well be Semantic SERPs (as compared to dull, static, boring blue links that you cannot do very much with).

Google and the other major search engines are constantly enriching their SERP displays, making them more informative, engaging, attractive and interactive.

If you think of the computer as a blank canvas being taught the notion of concepts (say in schema.org or the Knowledge Graph), it can only deal with the concepts and information it is given. The more information it gets from verified, structured sources, the greater the trust it will have in that information.

Evolving Landscape Of Semantic SERPS

Rich Snippets Are Getting Even Richer

Rich snippets display far more information, and more types of information, than when initially released. In addition, the information that is displayed is evolving over time. It is now a well-known fact that these have a significant effect on click-through rate.

It is critical to understand that entities must have sufficient information to be represented as a complete entity result. Don't just mark up enough information for the current "snippets" you see displayed. Make sure your markup is as detailed as possible, using as many schema.org fields as are applicable. Make sure your data is consistent and accurate. As an example, consider the case of product SERPs: make sure items are in stock if you say they are, and make sure the landing page links are not stale.

In all cases, make sure that on-page markup is readable by a machine and that the information displayed to a human is identical. In addition, ensure any information you submit in any kind of data feed or via an API matches and is synchronized with all other data you serve up to both the search engines and users alike.

Not only are the displays from Rich Snippets getting richer, they are also constantly in flux. Being aware of this and supplying as much information as possible on a webpage in the form of structured markup provides you with an innate advantage.

Rich Snippets are Getting Richer (Occupy More Screen Real Estate and are in Flux)

Direct Answers Are Proliferating

Direct answers have been increasing significantly in Semantic SERPs, in Google Now and in SERPs in general. They are a direct consequence of "searching over data." The SEO/SEM community has long noted that answers resulting from a search in Google compete with organic traffic (or click-throughs) to your site.

This may indeed be so. Why would a user click through to a site when the information they were seeking appears directly in the SERP?

In some cases, from a business owner’s perspective, the call-to-action is still occurring. This is perhaps more the case in something like informational search (where a phone call may result from a direct answer to a query) than in so-called dictionary lookups, where facts are just aggregated from multiple verified sources.

Proliferation of Answers in SERPS

Grouping Similar Items Or Aggregating Search Results Is A Clear Trend

Aggregated search results, where the entities are of the same type, are ideal for carousel displays. Searches like [San Diego Universities], [restaurants in Atlanta] or [events in Chicago] are examples which yield a carousel type display format in Google SERPs.

The carousel is being leveraged more and more, with sorting options available depending on the type of item being displayed (such as date for events or ratings for restaurants). Further illustrating the utility of carousels, Google announced on Tuesday that it is now showing a date selector in hotel carousels.

In the example below, the Knowledge Carousel is shown in SERPs by Google depicting a list of results for “things to do” (which maps to an object of type “tourist attraction”) in the Knowledge Graph. A different carousel is illustrated when querying for “Events in San Diego.” These events differ in that they are more temporal in nature (as opposed to tourist attractions), and have a start (and presumably an end) time and date specified.

Knowledge Carousel Depicting Events and Things To Do

The Knowledge Graph Is Now Invading Organic SERP Space

Until recently, Knowledge Graph results were served up in Google in the upper portion of the screen and along the right-hand side as a "panel." As mentioned earlier, Google announced last week that it would be including some Knowledge Graph information in organic SERPs. (You can read more about it here.)

The impact of this is clearly illustrated in the screenshot below, depicting an organic SERP listing for the Nikon Coolpix camera. You can clearly see the encroachment of Knowledge Graph results into the previously webpage-only SERPs area. 

Knowledge Graph Results Encroaching on Organic SERP Space

It even appears at times that Rich Snippet information is minimized or not displayed in favor of showing a Knowledge Graph snippet. The example below depicts a recipe search for [peanut butter eyeballs]. After the update by Google last week, the ingredients no longer appear as a rich snippet, but the enhanced Knowledge Graph results are appearing in their stead.

Rich Snippets for recipes. (Click to enlarge.)

It is critical to mark up as much information as possible using structured data markup from schema.org. The search engines can elect to display what they prefer, but marking up as many attributes as possible will position you well for any future changes.

Summary

With the evolution of the search engines in the direction of semantic search, SERPs have correspondingly evolved. Optimizing for SERPs means taking into account the Knowledge Graph as well as semantic markup.

Semantic markup and Knowledge Graph optimization are just a couple of aspects of optimizing webpages for display in Google, and they do not preclude typical best practices. Standards such as engaging content, a positive user experience and optimizing for user intent still have to be adhered to.

Key Takeaways

1. Make sure to keep Googlebot happy — feed it the data it needs in the formats it needs and ensure your information is machine-readable and consistent across all sources.

2. Make sure your on-page markup matches what your user sees and is also synchronized with any data feeds you submit.

3. Mark up as much data as possible using schema.org, even those attributes not currently supported. This way, you can position yourself to take advantage of changes and fluctuations in Semantic SERPs.

4. Remember, search is adaptive and interactive; design for a seamless user experience that flows with SERPs presented by the search engines.

5. Knowledge Graph Optimization is more than likely here to stay, so make sure you are prepared to deal with it.


The post From 10 Blue Links To Entity SERPs: Is Your Website Ready? appeared first on Search Engine Land.

Succeeding With Content In A Mobile World /succeeding-with-content-in-a-mobile-world-180869 Thu, 02 Jan 2014 16:43:19 +0000 http:/?p=180869 In 2013, mobile traffic in the US almost doubled. Cyber Monday 2013 was the year that Cyber Monday went mobile. And, according to Gartner, Q2 of 2013 was the first time that sales of smartphones surpassed sales of feature phones, with the former accounting for 51.8 percent of mobile phone sales worldwide. That last fact […]

In 2013, mobile traffic in the US almost doubled, and Cyber Monday 2013 was the year that Cyber Monday went mobile. And, according to Gartner, Q2 of 2013 was the first time that sales of smartphones surpassed sales of feature phones, with the former accounting for 51.8 percent of mobile phone sales worldwide.

That last fact in particular shows the untapped potential of the mobile web. The mobile web is on fire, and a mobile content strategy is a must for 2014.

Unlock the Potential of the Mobile Web

Evan Britton, CEO of FamousBirthdays.com, kicked off the IEEE International Symposium on Multimedia conference with a very engaging keynote entitled "Succeeding with content in a mobile world." Britton has grown FamousBirthdays.com to 3 million unique visitors monthly, and smartphone traffic — which currently accounts for 40% of the site's overall traffic — has grown 400% year over year. The site profiles famous people, and part of its content strategy is to do so in an informative, concise format that is easy to read on smartphones.

Britton's keynote discussed the psychology and expectations of mobile users, along with a great guide to what to incorporate in a mobile strategy. This was followed by a checklist of "hurdles," or things to avoid when creating mobile-friendly content. The talk culminated with some well-known success stories from companies that have deployed these mobile content strategies.

Mobile Users Are Different From Desktop Users

Mobile users are different from desktop users. Just as you don’t get to decide what platform your visitors use to access your site (they do), the same philosophy applies to your content development.

Let your users tell you what content they want to see. How can you do this? One great way is by leveraging query logs. Look at what those logs tell you about what users want to see. Just as Google will look at query logs to determine what questions to answer next, Britton applied the same strategy to his website and let user traffic tell him which celebrities the users themselves wanted information about. When queries came in and he had no corresponding content, he created that content. In essence, user demand drove the content creation.

Along the same lines of user engagement and content driving website strategy, Google updated its ranking advice on June 16, as reported by Barry Schwartz:

Previous:

“In general, webmasters can improve the rank of their sites by increasing the number of high-quality sites that link to their pages.”

Current:

“In general, webmasters can improve the rank of their sites by creating high-quality sites that users will want to use and share.”

In other words, content that engages the user and retains their attention is key. And, as mobile users are different from desktop users, here are some key takeaways in mobile content creation.

Expectations Of Mobile Users

Mobile users are different from desktop users, and thus their expectations are different. They even use different devices for different things. As Bing pointed out in a recent SlideShare presentation entitled "Choose your words carefully," there is no one set of ad copy that works best across all devices; you have to tailor your content (be it an ad or the site itself) to the device.

Remember: when considering the mobile user, simple and to-the-point (concise) is key.

Mobile users are different from Desktop Users: People use different devices for different things

Consider the following about your typical mobile user:

  • They are on the go. Content must be easy to digest and directly to the point. Success stories that bear this in mind include Twitter (with its 140-character limit) and Vine, a Twitter-owned video-sharing app built around six-second videos. In other words, content that errs on the side of being terse and to the point is a plus.
  • The screen is small. Vertical (up and down) scrolling is preferable — not horizontal. The challenge lies with the 320-pixel width limit.
  • Load time is critical, more so than ever. With the typical impatience of any Internet user exacerbated in an "on the go" situation (i.e., I want it, and I want it now), anything that significantly increases load time is to be avoided. Keep in mind also that Google's mobile guidelines emphasize the importance of page speed.
  • Static is better. Avoid animated content unless it is user-requested: skip animated GIFs, and make sure things like videos are click-to-play (play-by-default is to be avoided).
  • Less is more. Simplicity is critical, and a screen with a lot of "noisy" information can be a deterrent. As a specific example, look at the Amazon mobile-optimized website vs. the non-optimized version below; you can clearly see which has a stronger call to action.

Amazon mobile-optimized website vs. non-optimized: Less is more

Eliminate User Hurdles

User hurdles can be major obstacles, sometimes causing the user to instantly abandon the site. Britton gave the following checklist of potential hurdles and how to avoid them:

  • Unnecessary logins. Ever encountered one of those annoying login screens on your mobile device, suddenly lost interest, and looked for an alternative way to entertain yourself or find what you need? Avoid login screens whenever possible.
  • Too many steps. "Less is more" is a repeated and not-to-be-underestimated philosophy in mobile design. Too many steps can make things overly complicated and time-consuming, something the mobile user especially does not have time for.

Too Many Steps (Less is More)

  • Pop-ups. Make sure to eliminate popup usage for mobile content. Not only are these annoying to the user, they place an additional burden on load time.
  • Large logos. These take up most or all of your screen real estate on mobile — and don't accomplish much else. "A picture speaks louder than words" is a great philosophy to adopt, but if you are using that picture to assist in explaining a piece of content, it probably ought not to be your website's logo.
  • No clear call-to-action. Remember, mobile is about simplicity, not being “fancy” or “slick.” Make your call-to-action large. Do not try to brand on mobile; rather, focus on the user experience.

Examples Of Successful Mobile Content Strategies

This section details some of the preferred and effective strategies for handling content on mobile.

Headlines Work Better Than Stories

Headlines work better than stories for content discovery. Although the user will ultimately end up reading the story they are interested in, using primarily headlines is effective from a navigational perspective, as there is only so much screen real estate available in mobile.

Headlines Work Better Than Stories

Keep It Short And Concise

Typical success stories from companies that have adopted this strategy are Twitter (140 characters) and Vine (the six-second video-sharing app now owned by Twitter). Summly (purchased by Yahoo for $30 million) is another success story based on this design philosophy for mobile: simple, concise, less is more.

Consider Entity Search

Understanding the impact entity search has had on mobile is also critical. Since search results themselves are now entities, this paves the way for a far better user experience, as entities (and hence SERPs) can themselves be actionable. They can contain relevant images, calls-to-action, navigation to the appropriate location within a website, directions to a location and/or the answer to the search query itself.

Typical entity results can be seen by looking at rich snippet examples in SERPs or Knowledge Graph search results. You can clearly see how entity search and entity results in SERPs provide a far better mobile experience for the user. Google Now cards are a classic example of an effective utilization of this strategy in mobile.

Entity SERPs in Google Now

As you can see from the images above, the left-hand results depict mobile entity SERPs where the entity is of type "event," while the right-hand results depict SERPs where the entities are of type "tourist attraction." Clearly, any search whose result set is a list of entities of the same type is ideally suited for a carousel-type display. Interestingly enough, when Google first acquired Metaweb, it cited exactly this use of entities in search results:

With efforts like rich snippets and the search answers feature, we’re just beginning to apply our understanding of the web to make search better. Type [barack obama birthday] in the search box and see the answer right at the top of the page. Or search for [events in San Jose] and see a list of specific events and dates. We can offer this kind of experience because we understand facts about real people and real events out in the world. But what about [colleges on the west coast with tuition under $30,000] or [actors over 40 who have won at least one oscar]? These are hard questions, and we’ve acquired Metaweb because we believe working together we’ll be able to provide better answers.

In other words, when all the entity results are of the same type, there are all kinds of engaging visual displays possible, depending on how the information is aggregated. I presume A/B testing is conducted in real time on a wide scale to determine which specific result displays convert into the optimal user experience.

You can see how the desktop version of the same search will actually result in different display formats. Many of the entity SERPs (or rich snippets in this case) have different action items available.

Entity SERPs on a desktop, where all entities are of type “event”

 

Final Thoughts

The tips provided above address the expectations of mobile users as well as hurdles to success in the mobile world. Success stories with respect to mobile content emphasize the need to keep things simple and concise. There is also a need to look to the future by considering entity search.

Remember that optimization for semantic search is, in fact, applicable to mobile. Check out 5 ways to unlock the benefits of semantic search for more ideas!

The post Succeeding With Content In A Mobile World appeared first on Search Engine Land.
