Mark Munroe – Search Engine Land

Can You Now Trust Google To Crawl Ajax Sites?
Fri, 13 Nov 2015

On October 14, Google announced that it no longer recommends the Ajax crawling scheme it published in 2009. Columnist Mark Munroe dives into the question of whether this means you can now count on Google to successfully crawl and index an Ajax site.


Web designers and engineers love Ajax for building Single Page Applications (SPA) with popular frameworks like Angular and React. Pure Ajax implementations can provide a smooth, interactive web application that performs more like a dedicated desktop application.

With a SPA, generally, the HTML content is not loaded into the browser on the initial fetch of the web page. Ajax uses JavaScript to dynamically communicate with the web server to create the HTML to render the page and interact with the user. (There is a technique called “Server-Side Rendering” where the JavaScript is actually executed on the server and the page request is returned with the rendered HTML. However, this approach is not yet supported on all the SPA frameworks and adds complexity to development.)
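
To make the server-side rendering idea concrete, here is a minimal sketch using React (which exposes server rendering via its react-dom/server package), assuming an Express server; the App component and bundle path are illustrative, not taken from any site discussed here:

    // Minimal server-side rendering sketch (Node + Express + React).
    // "App" is a hypothetical top-level component; bundle.js is the client-side build.
    var express = require('express');
    var React = require('react');
    var ReactDOMServer = require('react-dom/server');
    var App = require('./App');

    var app = express();
    app.get('*', function (req, res) {
      // The JavaScript executes here on the server, so the response already contains
      // rendered HTML that any crawler can parse, not an empty application shell.
      var html = ReactDOMServer.renderToString(React.createElement(App, { url: req.url }));
      res.send('<!DOCTYPE html><html><body><div id="root">' + html + '</div>' +
        '<script src="/bundle.js"></script></body></html>');
    });
    app.listen(3000);

The client-side bundle then takes over the already-rendered markup in the browser, so users still get the interactive SPA while crawlers receive real HTML.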

One of the issues with SPA Ajax sites has been SEO. Google has actually been crawling some JavaScript content for a while. In fact, this recent series of tests confirmed Google’s ability to crawl links, metadata and content inserted via JavaScript. However, websites using pure SPA Ajax frameworks have historically experienced challenges with SEO.

Back in 2009, Google came up with a solution to make Ajax crawlable. That method uses either “escaped fragment” URLs (ugly URLs) or, more recently, clean URLs with a meta fragment tag on the page.

The escaped fragment URL or meta fragment tag instructs Google to go out and get a pre-rendered version of the page, one which has already executed all the JavaScript and contains the full HTML that Google can parse and index. In this method, the server serves the spider completely different page source code (HTML instead of JavaScript).
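
As a quick illustration of how the scheme works (the URLs here are made up), the hash-bang or meta fragment tag tells the crawler to request an “escaped fragment” URL, and the server is expected to answer that request with the pre-rendered snapshot:

    Pretty URL seen by users:          http://www.example.com/#!/hotels/napa
    URL the crawler fetches instead:   http://www.example.com/?_escaped_fragment_=/hotels/napa

    Clean URL opt-in tag on the page:  <meta name="fragment" content="!">
    URL the crawler fetches instead:   http://www.example.com/hotels/napa?_escaped_fragment_=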

With the word out that Google crawls JavaScript, many sites have decided to let Google crawl their SPA Ajax sites. In general, that has not been very successful. In the past year, I have consulted for a couple of websites with Ajax Angular implementations. Google had only partial success: about 30 percent of the pages in Google’s cache were fully rendered, while the other 70 percent were blank.

A popular food site switched to Angular, believing that Google could crawl it. They lost about 70 percent of their organic traffic and are still recovering from that debacle. Ultimately, both sites went to pre-rendering HTML snapshots, the recommended Ajax crawling solution at the time.

And then, on Oct 14, Google said this:

We are no longer recommending the AJAX crawling proposal we made back in 2009.

Note that they are still supporting their old proposal. (There have been some articles announcing that they are no longer supporting it, but that is not true — they are simply no longer recommending that approach.)

In deprecating the old recommendation, they seemed to be saying they can now crawl Ajax.

Then, just a week after the announcement, a client with a newly launched site asked me to check it out. This was an Angular site, again an SPA Ajax implementation.

Upon examining Google’s index and cache, we saw some partially indexed pages without all the content getting crawled. I reiterated my earlier recommendation of using HTML snapshots or progressive enhancement.

This site was built with Angular, which does not yet support server-side rendering (where, again, the server executes the JavaScript and returns the rendered HTML document), so progressive enhancement would be difficult to support, and HTML snapshots are still the best solution for them.

She replied, “But why? Everything I read tells me Google can crawl Ajax.”

Can they? Let’s take a deeper look at the new recommendation in regard to Ajax.

Google’s New Ajax Recommendations

In explaining why they are deprecating the old recommendation, they say (emphasis mine):

We are generally able to render and understand your web pages like modern browsers.

Many people might be quick to conclude that they can now crawl Ajax without a problem. But look at the language: “generally able”? Would you bet your business revenue on the knowledge that Google is “generally able” to understand your page?

Could it be that I am just quibbling over semantics? Let’s examine the announcement further. Later in their announcement, they state in regard to Ajax:

Since the assumptions for our 2009 proposal are no longer valid, we recommend following the principles of progressive enhancement.

They don’t spell it out in their announcement, but by recommending progressive enhancement (which serves baseline HTML to browsers that don’t support JavaScript), they appear to be implicitly saying, “Don’t count on us crawling your JavaScript.” Why recommend this method if indeed Google can consistently crawl SPA Ajax sites?

I worried that I was perhaps overanalyzing Google’s words, but then…

John Mueller Confirms Google Still Has Trouble With Ajax 

On October 27 (less than two weeks after the Google announcement), John Mueller, on his Webmaster Central Hangout, confirmed that Google indeed still has problems with Ajax.

You can view the exchange at around 1:08:00 into the video, where there was a question relating to a specific Angular implementation:

They still have trouble with rendering, and they expect to get better over time. John recommends some actions to help debug the issues.

Ultimately, he recommended using HTML snapshots until Google gets better at Ajax (Yes, the method that was just officially deprecated).

So, What To Do? 

  • Progressive enhancement. Server-side rendering would be required for progressive enhancement, and it is not yet supported by Angular. However, the upcoming Angular 2.0 will support server-side rendering. React does, in fact, support server-side rendering today.

    This is, however, more work than simply creating HTML snapshots. You need to make sure you render any required links so Google can crawl and index additional content that is loaded into the page.

    Nevertheless, for sites using an Ajax framework, this would be my recommended approach. (And, of course, it is Google’s recommended approach.)

  • Pre-rendering HTML snapshots. Again, don’t be confused if you have heard or read that Google no longer supports this method. They will continue to support it for the foreseeable future. They are just no longer recommending it.

    This method works; however, writing the code to pre-render and serve up the snapshots is not trivial. The good news is that there are several vendors out there, such as prerender.io, that will do the work for you at a relatively low cost. That is probably the simplest approach.

    This method is not ideal. Serving different source code to crawlers vs. browsers (HTML vs. JavaScript) can be problematic. It can be considered a cloaking technique, and it is not necessarily obvious what the bots are getting served. It’s important to monitor Google’s cache to make sure that they are not getting served the wrong page.

    Nevertheless, if you use a platform that does not support server-side rendering, then this may be your only solution.
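
For what it’s worth, here is a rough sketch of how snapshot serving is often wired up; the bot list and snapshot host are placeholders, and vendors such as prerender.io provide ready-made middleware that handles the edge cases far more robustly than this:

    // Hedged sketch: send crawlers a pre-rendered snapshot, everyone else the normal SPA.
    // SNAPSHOT_HOST is a placeholder for your snapshot service or cache.
    var express = require('express');
    var request = require('request'); // npm "request" package

    var BOTS = /googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit/i;
    var SNAPSHOT_HOST = 'http://snapshots.example.com';

    var app = express();
    app.use(function (req, res, next) {
      var isBot = BOTS.test(req.headers['user-agent'] || '') ||
                  req.query._escaped_fragment_ !== undefined;
      if (!isBot) return next();                           // normal users get the JavaScript app
      request(SNAPSHOT_HOST + req.originalUrl).pipe(res);  // crawlers get rendered HTML
    });
    app.use(express.static('public'));                     // serves the SPA shell and assets
    app.listen(3000);

Whichever variant you use, keep an eye on Google’s cache for your key templates to confirm the crawler is actually receiving the rendered version.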

Better Safe Than Sorry

Even if I had seen evidence that Google was consistently crawling Ajax sites, I would still be wary. It takes far more resources and much more time to fully render a page than to simply serve up HTML.

What will happen to sites with hundreds of thousands or millions of pages? How will it impact crawl budget? Will the crawl rate remain consistent?

Before recommending this approach, I’d rather wait and see strong evidence that Google can and does consistently crawl large, pure Ajax Single Page Applications with no negative impact on crawl rate, indexing and rankings. Please do share your own experiences.

SEO Disasters: Preventing The Unthinkable
Fri, 18 Sep 2015

Is your site suffering from a sudden drop in traffic? Columnist Mark Munroe notes that most SEO issues are preventable and shares his process for keeping these issues at a minimum.


Like any SEO veteran, I can recount my share of horror stories — launching Google Analytics and noticing that sudden, sickening drop in traffic.

Sometimes, sudden drops in traffic may be the result of an algorithm change (such as Panda). However, in many cases, they are caused by bugs, inadvertent changes or overambitious engineers with a little bit of SEO knowledge.

In this article, I will examine three real-life case studies and outline the steps necessary for SEO disaster prevention.

Case #1: “Something Bad Is Happening To My Website.”

I was at a company offsite, and my phone had been vibrating with several calls. I left my meeting and saw that my good friend (let’s call him “Tony”) had called several times and left a message: “I think something bad is happening to my website. Traffic is crashing. Some sort of SEO problem.”

Tony runs iFly, an extremely successful airport information site. Like many of us, he is very dependent on Google traffic; an SEO issue would be a big problem, indeed.

To give you an idea of how bad “something bad” was, look at this drop:

[Chart: iFly organic search traffic drop]

Murphy’s Law: “Anything that can go wrong will go wrong.”

“You found what on my site?”

When I first got the call from Tony, I suspected Panda because an update to the algorithm had just been released. Fortunately for Tony and iFly, it was not a Panda event.

The first thing I noticed was that the traffic drop impacted organic search traffic from both Google and Bing. That was great news, as it implied a technical problem. I took a look at the source, and lo and behold:

<meta name="robots" content="noindex">

In the iFly case, there was a tiny bug that resulted in the meta tag above being inserted into every page in the site. This one line of code was the cause of all his troubles. Essentially, it told Google and Bing to ignore every page of his site. They obliged.

Case #2: “Our Traffic Seems Soft.”

Early in my career, I ran product management for a site called RealTravel. You won’t find it online anymore — it was sold to Groupon and eventually shut down.

One morning, I arrived at work and got ready for my usual daily kickoff of coffee and Google Analytics. Our CEO came over with a worried look on his face.

“Our traffic seems kind of soft, can you see what’s going on?” We had been on a traffic upswing.

I generally open my Google Analytics reports with optimistic anticipation. That day, though, I had a sense of dread. Sure enough, we were down about 20 percent. It was a kick right in the gut.

The next day, we dropped a bit more, and then even more. Within a couple of weeks, we were down about 30 percent. This was a time when we needed to show traffic and go for funding. It’s hard to get funding when your traffic does this:

[Chart: RealTravel traffic drop]

Smith’s Law: “Murphy was an optimist.”

“Sorry, I made a server configuration change.”

Over the next week or so, I scoured our site, our rankings, Webmaster Tools, and our backlink reports to try to figure out what was going on.

I couldn’t initially imagine that it was related to changes we’d made a year earlier when we had gone through a site re-architecture. We had handled things correctly. We’d put in 301 redirects as Google recommends, mapping the old content to the new content. Things had gone swimmingly after the relaunch. In fact, traffic had shown a steady increase. A year later, I was not considering the site architecture as the potential cause. So naive…

A couple of weeks into the problem, I noticed that some old URLs were giving 404 errors in Google Webmaster Tools.

I started checking more of the old URLs and saw that many of them were 404-ing. My engineer looked into it and told me, apologetically, that he had made a configuration change that broke the redirects. Yikes! We had lost the value of all our historic deep links, and our domain authority had suffered.

Traffic eventually came back, but it was serious damage to a struggling startup.

Broken redirects: a nasty problem

There are lots of very valid reasons to change URLs. In our case, the new site architecture required it. Other reasons include:

  • Including the primary keyword in the URL.
  • Removing URL parameters.
  • Creating user-friendly URLs.
  • Changing the folder structure.
  • Moving to a new domain.
  • Switching to HTTPS (yes, those are different URLs).
  • Normalizing on a domain name with or without “www.”

While the way to handle redirects is well known (301 redirects) and most SEOs will get them set up correctly, the redirects themselves create a lasting vulnerability: if they ever break, the strong external links pointing at your old URLs lose their value. That is what hit RealTravel so severely.

Any problems immediately after the change are quite obvious. Google would still have the old URLs in the index, and clicking on the results would lead to 404s, so you would see no traffic coming into the new pages.

However, when it happens well after the fact, it is a particularly insidious problem. The site still works fine. Google’s index contains the new URLs, so click-throughs from the SERP work fine. You might notice a traffic drop in referral traffic, but for most sites, that is insignificant. You only notice a problem if you happen to click on a link from an external site.

Eventually, you will see the 404s in the Google Search Console crawl error report. However, there is a lag there, and by the time you notice these 404s, a good deal of the damage may have been done. Worst of all, if it’s only a few high-value URLs, you might not even notice.

I learned my lesson. Now, any time I do a URL change or site move, I insist on monitoring the redirects with test scripts, forever. Very, very few sites have a policy like this.
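
A script like that does not need to be elaborate. Here is a hedged sketch of the idea in Node (the URL map is illustrative, not RealTravel’s actual redirect map):

    // Hedged sketch: verify old URLs still 301 to the expected new URLs.
    var request = require('request'); // npm "request" package

    // Old URL -> expected destination; in practice this comes from the redirect
    // map created when the URLs were changed.
    var redirects = {
      'http://www.example.com/travel-blog/napa-valley': 'http://www.example.com/blog/napa-valley',
      'http://www.example.com/hotel.php?id=42':          'http://www.example.com/hotels/42'
    };

    Object.keys(redirects).forEach(function (oldUrl) {
      request({ url: oldUrl, followRedirect: false }, function (err, res) {
        if (err) return console.log('ERROR', oldUrl, err.message);
        var ok = res.statusCode === 301 && res.headers.location === redirects[oldUrl];
        console.log(ok ? 'OK  ' : 'FAIL', oldUrl, '->', res.statusCode, res.headers.location);
      });
    });

Run on a schedule, and after every server configuration change, a check like this surfaces broken redirects long before the 404s trickle into Search Console’s crawl error report.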

Case #3: “We Think Bing Has Hurt Our Google SEO.” 

I joined a large consumer website (a top 100 US site) to manage the SEO team. This is one of the most visible sites on the Web, one that you would surely know.

After years of hockey-stick growth, they had a sudden flattening of traffic over the previous six months. The director of product management told me something had been wrong with SEO.

They were pretty sure they’d identified the problem and believed that Bing was the cause. Bing had started crawling the site at an incredible pace, which had bogged down the servers (they had dedicated servers for the crawlers). In their estimation, the server response time had degraded such that Google was crawling fewer pages.

It was a good theory, and they had pretty much settled on it. Still, even after they had resolved this issue, traffic had not bounced back. The idea of blaming Bing for a Google problem was a good story. I found the irony quite amusing.

I wasn’t quite buying it. This was not definitive enough for me. I wanted to dig in, so I did.

Like many development teams these days, they had an agile website, with bi-weekly releases and frequent patches. I asked for a history of changes so I could get a handle on potential causes for the traffic drop. I was provided a bunch of cryptic engineering release notes from three or four of the website updates over that six-month period. Lots of stuff had changed, but nobody knew what, beyond a couple of major features.

I can’t show actual traffic numbers, but SimilarWeb highlights when the apparent problem surfaced.

[Chart: traffic flattening, via SimilarWeb]

Munroe’s Law: “With SEO, whatever can break, will break. Whatever can’t possibly break will also break.”

The Missing Links

This one was tricky to detect. They had actually introduced a bug six months earlier that was obscure and far from obvious.

The site was dependent on long-tail traffic from a specific type of content that was only crawlable via a deep HTML sitemap. The bug removed about half the content from these sitemaps. This issue went undetected for so long because HTML sitemaps are rarely visited pages.

Even so, the pages were not broken. The only way to detect this problem would have been to audit the links on these pages against the content that should have been generated on those pages (which is how I detected the problem).

More Gotchas 

With the increasing technical complexity of websites and SEO, there is a huge list of potential issues which can affect traffic and rankings. At SEORadar (my company), some of the issues we have come across include:

  • Rel=canonicals. A common problem is broken or accidental removal of the rel=”canonical” link element, which specifies the canonical page among those which have duplicate content. Rel=canonical problems abound, as many pages have logic to determine whether to put on a self-referential rel=canonical or legitimately point to another URL.
  • Nofollow links. We’ve come across sites with “nofollow” link tags for all the links on a home page. In one case, all the paginated links on a blog were nofollowed, preventing multi-paged posts from getting link equity.
  • Meta Robots. We’ve seen the nofollow, as well as the noindex meta tags, on pages where they did not belong. We’ve also seen noindex removed inadvertently, which led to Google’s index getting flooded with low-quality pages.
  • Robots.txt. Sometimes, websites inadvertently configure their robots.txt file to block pages and/or resources (blocking CSS can be a major mobile SEO issue).
  • 301 redirects. The most common 301 redirect issues involve accidentally removing redirects or morphing 301s into 302s.
  • Page titles. We’ve seen these getting accidentally wiped out or set to the domain name.
  • H1 tags. Issues often arise when <h1> tags lose keywords and/or get deleted from the page.
  • AJAX sites. Sometimes, websites using AJAX serve the wrong content to the search engine spiders. (This is particularly nasty, since spiders get their own distinct version of the page, and you need to regularly look at what the spider is getting served.)
  • Hreflang. The hreflang tag, which allows you to specify which version of a page should be served based on the user’s language or region, sometimes gets deleted.
  • Footer and Navigation links. Problems arise when links to important content are deleted.
  • Cross-links. Cross-links to important content might be removed, diminishing that content’s authority.
  • Meta Descriptions. Issues arise when the same meta description is posted to every page on the site or when descriptions get deleted.

Reality Is Stranger Than Fiction

Some cases are so strange that you would have trouble believing them:

  • Imagine a site that has built a nice, friendly m.dot implementation. However, the mobile crawler was not seeing this site. Apparently, there was some old cloaking code that served up the old “dumb” phone/feature phone version of the site to the spiders. Yes, they went to the trouble of building a great mobile site and hid it from the spiders.
  • Another site, a software service, had an opt-out page to cancel their service. Somehow, an entire content section of the site had rel=”canonical” links pointing back to the opt-out page (ironically strengthening the one page on the site they did not want users to actually see).
  • On one site we looked at, there was some cloaking logic serving bad rel=canonicals to the crawler. If you looked at the page source from your browser, the rel=canonicals looked fine. To catch this, you had to compare the rel=canonicals in Google’s cache to what was on the user’s page.

You can’t make this stuff up!

Why Do So Many Things Break?

How can these things possibly happen?

  • Complexity and “if” statements. Consider the meta robots “noindex” tag:
    • Some content you may want indexed, and some you may not. In that case, there is often logic that is executed to determine whether or not to insert the “noindex” tag. Any time there is logic and “if” statements, you run a risk of a bug.
    • Typically, sites have staging environments or subdomains they don’t want to be indexed. In this case, logic is needed to detect the environment — another opportunity for a bug if that logic gets pushed live.
    • Sometimes, developers copy templates to build a new page. That old template may have had a noindex.
  • Frequent releases. Most websites have at least weekly updates, as well as patch releases. Every update presents a risk.
  • CMS updates. Manual updates to content via CMS can completely de-optimize a page. For sites that use WordPress, it is very easy to accidentally noindex a page if you use the Yoast plugin. In fact, I know one very prominent site that noindexed their most visited blog post.
  • Competing interests. Many hands with competing interests all have the potential to muck up your SEO. Whether it’s designers deleting some important text, product managers deleting navigation, or engineers using AJAX and hiding important content, the risk is ever-present that something can go wrong.

What Happened To My Site?

Most websites do not have a good handle on what updates and changes have been made to their website (and when). Sure, there might be some well-written release notes (although cryptic seems more common), but that won’t tell you exactly what changed on a page.

How can you research a traffic drop if you don’t know what has changed? The current modus operandi is to go to the Internet Archive and hope they have a version of the page you are interested in.

This is not just an SEO issue. It impacts conversion, monetization, UX metrics — in fact, all website KPIs. How can you understand what’s causing a shift in your KPIs if you don’t know exactly what changed and when?

SEO Testing

SEO testing is also a big problem.

Let’s say there are 10 important page templates for a site and 20 things you want to verify on each page template. That’s 200 verifications you must do with every release.

These tests are not black and white. The existence of a noindex tag on a page is not necessarily a problem. Just because a page has a title doesn’t mean it has the right title, or that the title hasn’t changed. Just because there is a rel=canonical doesn’t mean it points to the right destination. It’s easy to write a test script to tell you that a title is missing; it is much harder to tell that the title has had its keywords removed.
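
One way to make those checks meaningful is to diff key elements against a known-good baseline instead of merely testing for their presence. A minimal sketch, with illustrative baseline values, using the cheerio library for HTML parsing:

    // Hedged sketch: compare a page's key SEO elements against a stored baseline.
    var request = require('request');
    var cheerio = require('cheerio'); // server-side HTML parsing

    var baseline = {
      url: 'http://www.example.com/hotels/napa',
      title: 'Napa Valley Hotels - Compare 120 Hotels | Example.com',
      canonical: 'http://www.example.com/hotels/napa',
      robots: ''                                     // expect no meta robots tag
    };

    request(baseline.url, function (err, res, body) {
      if (err) return console.log('ERROR', err.message);
      var $ = cheerio.load(body);
      var current = {
        title: $('title').text(),
        canonical: $('link[rel="canonical"]').attr('href') || '',
        robots: $('meta[name="robots"]').attr('content') || ''
      };
      Object.keys(current).forEach(function (key) {
        if (current[key] !== baseline[key]) {
          console.log('CHANGED', key + ':', JSON.stringify(baseline[key]),
                      '->', JSON.stringify(current[key]));
        }
      });
    });

The same pattern extends to h1s, hreflang tags and important navigation links; the hard part is keeping the baseline current with every intentional change, which is exactly what dedicated change-monitoring tools automate.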

Gaining Control

Murphy’s First Corollary: “Left to themselves, things tend to go from bad to worse.” 

… so they can’t be left to themselves.

I’ve painted a pretty scary picture here. SEO is fragile. I am sure many sites have lost traffic with no clue that they are the victims of a preventable problem.

Every SEO needs to have tools and processes in place to prevent these mishaps, and to implement education programs so that every part of an enterprise (large or small) understands what can impact SEO.

Audit

You can use an audit to get a good baseline of your site. While it won’t tell you whether a title has been changed and important keywords lost, you can use it to make sure there are no problems with the current state of the site.

All the major enterprise platforms provide some audit capabilities, and you can supplement this with tools like Moz, Raven and Screaming Frog.

If you have a good handle on the current state of your site, then life becomes easier; you can focus on looking at changes when new releases are pushed. However, an audit is just the starting point.

Change Monitoring

Whenever a new version of your site is pushed live — or better yet, is still in staging — the most critical thing to do is a change analysis.

If you know (from your audit) that the site was in a good state before the push, then just looking for changes will minimize your work. You only need to take a close look at things that have changed to validate that nothing has broken.

It’s best to use something that monitors and alerts you to changes and potential problems. For that, I recommend:

  • SEORadar (disclaimer: this is the company I founded). It is focused explicitly on monitoring changes and generating alerts. You can get a fair amount of protection with the free version, which includes redirect monitoring if you have changed URLs.
  • RioSEO. This is the only other extensive change monitoring system focused on SEO that I know of. It’s contained within their enterprise platform.
  • In-house developed tools and scripts. Some companies prefer to develop their own test and monitoring solutions.
  • Robotto. Somewhat limited, but still useful. It monitors for robots.txt changes and runs basic HTTP redirect tests.
  • Manual process. If you maintain a regular archive of your pages, you can set up a manual process. You will need this to analyze changes, and this will be invaluable if you need to troubleshoot SEO or other site issues. For instance, you can do this by saving audits run with Screaming Frog.
  • The Internet Archive. This is another possibility for an archive; however, it is hit or miss on whether it will have the page you are looking for.

What Pages Should You Monitor?

  • Page templates or page types that generate significant traffic. For instance, on an e-commerce site, that would include product pages, category pages and brand pages. Generally, these pages are generated algorithmically off of the same page template, so monitoring a few instances of each page type should suffice.
  • Pages that are important for indexing (HTML sitemaps, indexes and other pages that play a major role in getting content indexed, even if they don’t generate much traffic).
  • The home page.
  • Individual content pieces or blog posts that generate significant traffic or are otherwise important strategically.

Links, Links… Precious Links

Links are more precious than ever in 2015. What if a simple, hard-to-detect server configuration issue negated the value of many of your deep links (and all the PR and branding work you did to earn those links)?

Don’t let a server configuration problem damage your domain authority, as in the case of RealTravel. If you have changed your URLs (or even switched to HTTPS), you want to know that those redirects are always in place. You need to monitor those links forever and ever. Either write custom scripts or use an app/software that monitors your old URLs.

Education

With SEO, you will be in a state of perpetual education and training. You want everyone on your team protecting and preserving your website’s SEO. They have to know the risks and potential consequences of various changes to the site.

To achieve that, you need to focus on education by doing the following:

  • Periodic training with product managers, engineers and quality assurance.
  • Push as much testing as possible over to QA. Provide them with a checklist for their verification process. However, do not depend on it. If something breaks, all eyes will be on SEO.
  • While company-wide training is good, individual training with each team is important, as well. It is more personalized and helps get buy-in and commitment.
  • Provide backup material and cheat sheets describing all the on-page elements of importance.
  • Over-communicate whenever some sort of SEO discussion comes up. Always explain the “why.” What is second nature to you is going to be very slow to register with people who are not involved with SEO. People just won’t get it from sitting through a single PowerPoint presentation — it takes repetition.

Process

SEOs shouldn’t fully depend on other teams to maintain a website’s SEO. We must be proactive, as well, by developing a process for monitoring and identifying site changes, and we must make this process a habit.

  • Know what changes are being made to your website. Attend product meetings.
  • Have a review with your QA team to quickly highlight potential problem areas in each new release to help them understand where they should focus.
  • Repeat SEO training sessions on a regular schedule. Make sure new employees get indoctrinated.
  • Test the site before a release goes out.
  • Test again after the release goes out.

Control The Controllable 

Search engine optimization, particularly in-house SEO, is incredibly stressful. We have to control the uncontrollable and predict the unpredictable.

Let’s make things a bit easier by controlling the controllable and avoiding those SEO disasters. If you have any additional cautionary tales, please share.

From SEO To SXO: Search Experience Optimization
Fri, 07 Aug 2015

When it comes to search engine optimization, columnist Mark Munroe suggests you need to think beyond your website.


How does one win at SEO in 2015 and beyond?

Some of the directives we have been hearing for years are truer than ever:

“Just create great content!”

“Content is king!”

“Build a quality site!”

But what is “great”? How do you measure “quality”?

You can’t evaluate content quality without considering the expectations of the user. It doesn’t matter how well the content is written if it’s out of sync with the user’s expectations. Great content, in the context of search, means you have moved beyond SEO to Search Experience Optimization (SXO).

The Search Experience Starts On Google And Ends On Google

Typically, user experience (UX) optimization focuses on optimizing success metrics. Perhaps those metrics are based upon converting users into buyers, gathering email addresses, generating page views or getting users to click on ads.

These are your metrics, not Google’s — they are your customers, not Google’s. With SXO, you need to focus on Google’s customer. This experience starts and ends on Google.

We have access to all sorts of metrics about our own site, some of them very clear (like a purchase) and some vague (like bounce rate). However, we have little access to how Google may be measuring results on their own site. It’s a deep black hole. To optimize the search experience, we must shed some light on that darkness!

You Can’t Always Get What You Want

We want users who are ready to buy, book, subscribe or otherwise take an action that’s good for our business. The following chart shows a hypothetical breakdown of how search visitors might interact with a website:

[Chart: hypothetical breakdown of how search visitors interact with a website]

In this case, 60% of the users never take a single action. Did they get what they wanted? Who are these people? Why are they landing on your site?

A mere 10% took an action that can reasonably be viewed as a successful visit (a sign-up or a purchase). What about everyone else? Even those who didn’t bounce? Did they walk away from the site frustrated or happy?

We don’t always get the visitors we want. Search Experience Optimization means optimizing the user experience for the users we get, as well as the ones we want! Not only will that align with what Google wants, but a better understanding of all our users will help our business objectives, as well.

What Google Wants: Provide An Answer!

Except for navigational searches, almost all searches are questions, even when they are not phrased as such. Very simply, Google wants to provide answers — as evidenced by the increasing number of direct answers appearing within search results.

Consider the following searches:

[Image: example searches and the questions they imply]

Google is successful when it provides an answer — but how does Google know if it has done so successfully, especially when the query is not obviously a question?

How Does Google Evaluate?

Obviously, Google has its own internal metrics to measure the quality of its search results. Just like our own sites, Google must have metrics based on what users click on — in fact, Google recently confirmed this.

It makes sense that Google analyzes click behavior. Likely and oft-discussed metrics it is looking at include:

  • Short click. A “short click” is a quick return from a website to Google. Clearly, a very quick return is not a good signal.
  • Long click. This refers to a long delay before the user returns to Google. Longer is better.
  • Pogosticking. This is when a searcher bounces back and forth between several search results.
  • Click-through rate. How often users click on a given result compared with how often it is displayed (expressed as a percentage).
  • Next click. What a user clicks on after “pogosticking” back to Google (either another search listing or a new search).
  • Next search. When a user moves on to a new search.
  • Click rate on second search. When a previous page is elevated due to a personalized search and/or a previous click.

The Next Click

The most telling signal to Google may very well be the “next click.” If Google wants to provide the answer to a query, the next user click tells them what they need to know. Did the user find someplace to buy their New Balance running shoes? Or a review for the B&B in Napa?

If a user returns and clicks a different search result from the same query — or, upon a subsequent visit to Google, repeats the same query — that could be a signal that the initial search was not satisfied. If a user comes back and does a completely new search, that could mean the user was satisfied with the result.

Google has yet to confirm that click behavior directly influences rankings. It’s hard for me to imagine that it doesn’t. But even if it doesn’t affect rankings, Google likely uses it to influence and evaluate other changes to their algorithm. Either way, if the appearance of your site in Google’s SERP improves their metrics, that can only be fantastic for your organic search.

Kill The Search

Therefore, to optimize the search user experience, you must end the user quest and kill the search.

The user must have no reason to go back to Google and continue their quest for an answer. We will never know what that next click is. However, we can influence that next click by understanding our users.

To “kill the search,” we need to understand why users are landing on our page. What was the question or need that drove them to Google in the first place?

Consider a hotel site when a user searches for a specific hotel:

  • Are they price shopping?
  • Looking for reviews?
  • In need of driving directions?
  • Researching amenities?

Of course, we can make educated guesses. But we can do better.

Keyword Data

Keyword data is a good place to start. Begin by examining keyword data from Webmaster Tools (now rebranded as the Search Console) and looking for modifiers that reveal intent. Look at keywords for specific page types and for high-traffic individual pages.

Many keywords will be vague and not reveal intent. If you are a travel site, for example, you might see “Hyatt Regency” 100 times with no modifiers and only 20 times with modifiers (such as “reviews,” “directions” or “location”). The frequency of those modifiers can give you a good idea of the broad questions users have when they land on your site.
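
If you export those queries (for instance, as a CSV download of your Search Console query report), a few lines of code can tally the modifiers for you; the file name, column position and modifier list below are illustrative:

    // Hedged sketch: count intent modifiers in an exported query list.
    var fs = require('fs');

    var modifiers = ['review', 'directions', 'location', 'price', 'deal', 'map', 'phone'];
    var counts = {};

    fs.readFileSync('queries.csv', 'utf8').split('\n').forEach(function (line) {
      var query = (line.split(',')[0] || '').toLowerCase(); // assumes query is the first column
      modifiers.forEach(function (m) {
        if (query.indexOf(m) !== -1) counts[m] = (counts[m] || 0) + 1;
      });
    });
    console.log(counts);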

This is only a starting point. There might be many user queries about which you have no data, simply because you do not rank for those queries. That’s when you need to go to keyword tools like SEMrush or the Google Keyword Planner. I also like to use UberSuggest to get a good overview of what the user mindset is. (Although it does not have query volume, it catches many variations you don’t see in the other tools.)

[Screenshot: UberSuggest keyword suggestions]

Keyword data is a good start toward getting into our users’ heads. But that’s only a start. Let’s take it further.

SEO Surveys

Surveys are fantastic tools to help you understand why people landed on your site. I’ve been doing SEO surveys for many years, using tools like SurveyMonkey and Qualaroo. Of course, surveys themselves are disruptive to the user experience, so I only keep them running long enough to reach statistical significance. I usually find 100 responses is sufficient. Things to keep in mind:

  1. You want to segment users based on search. This is an SEO survey, so it is only triggered for search visitors. (Of course, it’s useful to extend the survey to other segments, too.)
  2. The purpose of this survey is to understand why the user landed on your site. What was the question or problem that drove them to search?
  3. You need to trigger the survey very quickly. If you wait too long, you will have lost the opportunity to include the people who bounced almost immediately (those are precisely the people you want to catch!). Generally, I launch after 10 or 15 seconds (see the trigger sketch after this list).
  4. The surveys should be segmented by page type. For example, people landing on a hotel property page on a travel site have very different motives from those of people landing on a city/hotel page. For high-traffic content pieces, you want to survey those pages individually.
  5. Your survey segments should represent a significant portion of your SEO traffic.
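
As a rough sketch of points 1 and 3, the trigger logic can be as simple as checking the referrer and delaying the launch; showSurvey() stands in for whatever call your survey tool actually exposes:

    // Hedged sketch: show the survey only to search visitors, shortly after landing.
    (function () {
      var ref = document.referrer || '';
      var fromSearch = /google\.|bing\.|yahoo\.|duckduckgo\./i.test(ref);
      if (!fromSearch) return;                // segment: organic search visitors only
      setTimeout(function () {
        if (typeof showSurvey === 'function') {
          showSurvey({ segment: 'organic-search', page: location.pathname });
        }
      }, 15000);                              // launch after ~15 seconds
    })();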

Ask the users, “Why did you visit this site today?” and list different options for the reasons. Make sure you list an “other” to capture reasons you might not have thought of. For instance, on a real estate home sales site, I have asked if users were looking for:

  • A home to buy
  • A home to rent
  • Home prices
  • School information
  • A house estimate
  • Open houses
  • Maps

Based on your survey data, you can create a prioritized list of user needs. Often, you will find surprises which can also turn into opportunities. For example, suppose you survey users who land on your real estate site on a “home for sale” page, and you discover that 20% would also consider renting. That could be a great cross-marketing opportunity.

Statistical Significance

You want to satisfy your users, but you can’t please everyone. You will need to prioritize your improvements so that they meet the needs of as large a percentage of your visitors as possible.

For example, if 25% of visitors to a restaurant page want to make a reservation, and they can’t (either because the functionality isn’t there or due to usability problems), you have an issue. If only 1% want driving directions, that is a much smaller issue.

10 Seconds Is All You Get

UX expert Jakob Nielsen performed an analysis on a Microsoft Research study a couple of years ago that showed that the largest visitor drop-off came in the first 10 seconds. If you get past 10 seconds, then users will give your site a chance. This means you have a maximum of 10 seconds to convince visitors that you:

  • Have the answer to their question
  • Have an answer that they can trust
  • Will make it easy to get their answer

That’s a tall order, and your page design needs to balance many competing priorities. To design an effective landing page, you need to know what visitors’ questions are.

SEO Usability Testing

Usability testing is a great tool to help determine how successful users are at meeting all their goals. In 2015, it definitely should be considered part of an SEO’s role. If that task falls to the UX or product team, work with them to make sure your tests are covered. If not, then take the lead and feed the results back to those organizations.

For SEOs who don’t have experience with usability, I suggest Rocket Surgery Made Easy. Additionally, there are online services which provide valuable, lightweight and rapid test results. I’ve used both UserTesting.com (for more extensive tests) and FiveSecondtest.com for quick reactions from users. Here are some tips specific to SEO usability testing:

  • Create a good SEO user scenario. Set the context and the objective. Start them on a search result page so you can observe the transition from search result page to your landing page.
  • Focus on the landing page templates that get the most traffic.
  • Focus on the dominant problems and use cases that you have identified through keyword analysis and surveys.

Consistent Titles & Meta Descriptions

If every search is a question, every result in the search results is a promise of an answer. Please make sure your titles are representative of what your site provides.

If you have reviews on 50% of your products, only mention that in the title and meta-descriptions of products that actually have reviews. Otherwise, you will be getting bad clicks and unhappy users. Another example is the use of the word “free.” If you say “free,” you’d better have “free!”

An Improved Web Product

The role of a successful SEO has broadened, and it demands that we understand and solve our visitors’ problems. It’s a huge challenge for SEOs, as we need to broaden our skill set and inject ourselves into different organizations. However, this is much more interesting and rewarding than link building.

Ultimately, the role of SEO will become even more critical within an organization, as the learnings and improvements will broadly benefit a website beyond just improving SEO. The great thing about the transition from Search Engine Optimization to Search Experience Optimization is that the end result is not just more traffic, it’s a better Web product.

Consider Interstitials When Mobile-Friendly-ing Your Site For Google
Fri, 03 Apr 2015

Columnist Mark Munroe explores an element of mobile SEO that may not have occurred to some webmasters.


When you’re prepping your site for the mobile-friendly algorithm update Google’s rolling out on April 21, don’t forget to consider a key element of the user experience — interstitials.

Many mobile websites begin users’ visits by presenting them with an interstitial page instead of sending them to the intended piece of content, with the goal of getting the user to take a specific action. In the examples below, Zulily wants users to sign up, while Indiegogo asks users to download its app.

[Screenshots: Zulily email sign-up interstitial; Indiegogo app download interstitial]

Users typically find these pages quite annoying. But when businesses are weighing the benefits of promoting their app or spurring sign-ups versus the user experience, the interstitials often win out.

However, with Google’s new focus on the mobile user experience, it may be time to reconsider the use of disruptive interstitials. Read my full column on the subject on Marketing Land.
