Monitoring web migrations: A checklist for moving from one site to another
By Aleyda Solis, Search Engine Land, February 12, 2018

Contributor Aleyda Solis walks through what to monitor during a web migration and explains how to fix issues so you don't lose traffic and rankings.


Whether it is a website rebranding, a consolidation of different web properties or an HTTP to HTTPS migration, when you are implementing a structural web change, it is critical to monitor the crawling, indexing, rankings, traffic and organic search conversions on both the old and new web locations. Careful tracking will enable you to fix potential problems as they arise.

Besides establishing a relevant strategy that follows search engine optimization (SEO) best practices, here are the most important areas and steps to monitor during the web migration stages. Be ready to identify any issues that could cause a negative impact, as well as opportunities.

Getting started

Start tracking your organic search visibility on the old and new web locations at least a couple of months before the migration takes place.  This will make it easier to identify any unexpected and inconsistent behavior when the change happens.

Old vs. new web crawling

Let’s start with the most fundamental aspects to validate and monitor after executing a migration:

  • Can the URLs be crawled in the new web location? Make sure the pages and resources are now effectively accessible in their new addresses.
  • Have the 301 redirects from the old to the new URL locations been effectively implemented and tested?

This can be done by using an SEO crawler like Screaming Frog, Sitebulb or Deepcrawl, as well as SEMrush, Ryte or Sistrix, to simulate the search crawlers' behavior. You can also check the log files of both the old and new websites.

You can start by simulating both the desktop and smartphone search engine user agents to validate the site's uniform resource locator (URL) accessibility in the new destination, as well as using the "list crawl" option to verify the hypertext transfer protocol (HTTP) status of all the old URLs.

You are looking to see whether all the old URLs 301-redirect to relevant, correctly configured new URLs. If the redirects are set up correctly, the new URLs will be indexed rather than the old ones, the old URLs will not return a 200 OK or a 404 error status in the old location, and you will avoid redirect loops or chains, which are more difficult for search engines to follow.
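
If you want to script a quick version of this check, the sketch below reads a plain-text list of old URLs and reports whether each one 301-redirects in a single hop to a URL that resolves with a 200. It is a minimal sketch, not a replacement for a full list crawl with the tools above: it assumes Python with the requests library installed, and "old_urls.txt" is a placeholder name for your own export.

```python
# Minimal 301-redirect check for a list of old URLs (sketch, not production code).
# Assumes Python 3, the "requests" library, and a plain-text file "old_urls.txt"
# (a hypothetical file name) with one old URL per line.
import requests

with open("old_urls.txt") as f:
    old_urls = [line.strip() for line in f if line.strip()]

for url in old_urls:
    try:
        response = requests.get(url, allow_redirects=True, timeout=10)
    except requests.RequestException as error:
        print(f"{url}\tERROR\t{error}")
        continue

    hops = response.history  # each intermediate redirect response
    first_status = hops[0].status_code if hops else response.status_code
    final_status = response.status_code

    if not hops:
        print(f"{url}\tNO REDIRECT\tfinal status {final_status}")
    elif len(hops) > 1:
        print(f"{url}\tREDIRECT CHAIN ({len(hops)} hops)\t-> {response.url}")
    elif first_status != 301:
        print(f"{url}\t{first_status} REDIRECT (expected 301)\t-> {response.url}")
    elif final_status != 200:
        print(f"{url}\t301 BUT FINAL STATUS {final_status}\t-> {response.url}")
    else:
        print(f"{url}\tOK\t301 -> {response.url}")
```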

To avoid missing old URLs, it is ideal to create a list of the former pages to migrate and monitor, drawing on several data sources:

  • Crawlable URLs in the old location to be found with crawling simulations.
  • URLs with any traffic in the last months or year.
  • URLs with any top 20 rankings.
  • URLs with any external link.

Most SEO crawlers will integrate with other tools. For example, with Deepcrawl you can integrate data from Google Search Console, Google Analytics, Majestic and log files to identify potential crawling gaps. This includes orphan URLs that might not be used anymore but that brought value and should be migrated to a new URL version.

It is also advisable to schedule continuous crawls, especially the days before and after the migration happens, and compare with previous results to see if the change is consistent and the redirects haven’t been changed since being set.

I recommend you use a web monitoring service, such as LittleWarden (shown below), that will send an alert if there are changes in the HTTP status or other SEO-related configurations on the pages you are monitoring.

[Image: LittleWarden]

Old vs. new web indexing

Monitor the relevant new pages and resources to see if they have been indexed in their new location while the old ones are disappearing from Google’s index. This is easier to do now with the new Google Search Console, where you can add the old and new web locations involved in the migration as independent properties to track.

In the new Index Coverage report, you can see what pages have been effectively indexed from the new location and which have been left out. The report will provide a reason (using a noindex tag, canonicalizing toward others and so on) so you can rectify the problem and re-submit them to the index when the issue is fixed.

The same should be done with the old web location. In this case, check to be sure all of the URLs are 301-redirecting and that each redirect points to its relevant new location.

Old vs. new web rankings

It is also critical to monitor rankings and traffic in the new and old web locations for both desktop and mobile search results. Use both internal and external ranking sources to make sure any rankings lost from the old web location are effectively gained on the new one.

You can easily use the Google Search Console performance report to track countries that are targeted, plus images, videos and rich results.

Monitor the old site for a drop in trend and rankings while the new one is moving up. It is also important to check the performance (with regard to average position and CTR) over time. Although it might take a while to completely gain the old rankings with the new location, you can ensure the new site pages are ranking for the correct keywords and result type (image, video, AMP and so on), especially if you have consolidated the content from one or more pages onto a single page. You’re basically looking to ensure that rankings show a positive trend over time, making certain that there are no gaps in mobile vs. desktop performance or ranked pages.

It is also important to establish a group of your own relevant queries to track and use other rank trackers — such as SEOmonitor, Advanced Web Ranking, SERPwoo and Accuranker — for both the old and new web locations. Your tool should provide day-to-day data updates, since Google Search Console usually has a two- or three-day delay in performance data.

In addition, you can track queries which might not be included in the Google Search Console if they are not top-ranked terms but you are targeting them in the SEO process.

When using rank trackers, you will need flexibility in the groups you create to monitor ranking behavior: by ranking spread (for example, queries ranking in the top three or top 10 positions), by website category or by product. You will also want to be able to choose any specific query and see its ranking trend over time, in both desktop and mobile results, to identify groups of pages whose rankings are not evolving as expected.

For example, when leaving certain pages in the old web location or when consolidating many old pages into just one, the ranking trend might not be so straightforward. You want to be able to identify this behavior in order to do the proper changes, such as improving the new page’s relevance toward those queries or eliminating/migrating pages you did not move initially.

Ideally, you will also want to track the ranking visibility for relevant keywords against your competitors, to give a context to its organic search visibility trend in the industry before and after the migration.

To complement the monitoring of your most relevant and top queries, you might also follow up with the overall organic search visibility of old and new pages by using external ranking indexes such as Sistrix, Searchmetrics or SEMrush. Doing so helps identify an inconsistent ranking behavior, both in the old and new web location, for terms you are not directly tracking.

When selecting which “external” ranking source to use, find which source has provided consistent historical data with your own organic traffic. It is almost impossible to have complete accuracy, so you might want to use one that has shown a consistent trend over time.

Additionally, not all tools will provide the same degree of data refresh, since many update their rankings on a weekly basis. Their flexibility or granularity to track certain types of web properties differs, so, depending on the type of migration you are monitoring, check if they track HTTP vs HTTPS URLs independently, as well as directories and subdomains for both mobile and desktop results.

Old vs. new web traffic

Last but not least, you need to monitor the evolution of your organic search traffic and the conversions coming from the old and new web locations before and after the migration takes place.

You can use Google Analytics' organic traffic reports to easily compare traffic data from after the migration with data from before, as long as you have migrated the same website or are using the same property ID.

Check for traffic correlations with the organic search rankings, not only in the overall trend but also in the top pages receiving traffic. Look at user behavior metrics such as bounce rate, time on site, conversion rate and revenue: all should remain at least the same, and as a bonus, they may improve if page design and optimization are enhanced after the rankings are regained.

You should be able to integrate your organic traffic data with your tracked keyword rankings to see the traffic and conversions impact. A tool such as SEOmonitor can help with this.

Although setting a tracking system to monitor the crawling, indexing, traffic and conversions from an old site to a new one can take time and resources, it will help you understand and quickly identify issues to fix during the migration process.

All Google Manual Penalties Explained from SMX East
By Aleyda Solis, Search Engine Land, October 26, 2017

Eric Enge shared insights at the "All Google Manual Penalties Explained" session.


How can you identify if your site is suffering from a manual penalty?

Google's manual penalties differ from algorithmic updates such as Panda for content or Penguin for links. Those updates can also be perceived as penalties, since sites tend to lose organic search visibility as a result of them as well. Manual penalties, however, are applied by hand: Google's reviewers assess your site against Google's quality guidelines and issue a manual action when it falls short.

Stone Temple Consulting’s Eric Enge discussed how to identify and address manual penalties during a session at SMX East in New York City this week.

If the reviewers find that your site is not following Google’s quality guidelines, you’ll receive a manual action notice in the relevant site’s Google Search Console, explaining how the site is not compliant with the guidelines and whether this is happening in certain pages or at a site level.

Manual penalties are usually sent for the following actions:

  • Unnatural links to your website.
  • Unnatural links from your website.
  • User Generated Content spam.
  • Hacked website.
  • Pure spam.
  • Spammy structured markup.
  • Hidden text or keyword stuffing.
  • Thin content with low or no added value.
  • Cloaking or sneaky redirects penalty.
  • Spammy freehosts.

How can you recover from a manual penalty?

The “thin content pages” penalty is one of the most common content-related penalties, shown usually for sites with:

  • business listings that have no specifically relevant and useful information in them, which are mostly ad-targeted. 
  • pages featuring curated links with no additional content in them.  
  • doorway pages whose only purpose is to obtain search traffic and convert it, with little added value, often poorly linked from the other site pages. 

If your site suffers from this or another type of content quality penalty, the first step is to find your poor-quality pages. Then decide, for each one, whether to improve its content, so the page features specifically relevant and unique text that delivers value to the user, or to keep it out of the index, either by 301-redirecting or canonicalizing it to a better content page version, or by noindexing it with a meta robots "noindex, follow" tag.

In the case of the link-related penalties, some of the unnatural links that can cause trouble are:

  • web directories.
  • article directories.
  • international links.
  • bad anchor text mix.
  • coupon codes.
  • poor quality widgets.
  • affiliate links.

Ideally, you should work on a day-to-day basis to avoid suffering a penalty. In the case of the link-related ones, you should prune your bad links by using various link resources. Categorize and analyze them in order to identify those that are very low-quality and/or following an unnatural pattern.

Only eliminate unnatural links that are really hurting your site. Although certain tools can help with this process, to avoid further errors, it is critical to understand that this process shouldn’t be completely automated.

One of the most common issues in the link-pruning process is finding that certain links cannot be removed from the sites where they have been placed. In this case, you can use Google’s own disavow tool, which can be found in the Search Console.
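
If you end up with links that cannot be removed, the file you upload to the disavow tool is plain text: one URL per line, "domain:example.com" to cover a whole domain, and lines starting with "#" as comments. Here is a minimal sketch that assembles such a file from two hypothetical Python lists; in practice, you would load the entries from your own categorized link audit data.

```python
# Minimal sketch that writes a Google disavow file from a link audit.
# The two lists below are hypothetical examples; in practice you would load
# them from your own categorized link data (spreadsheet, CSV, etc.).
bad_domains = ["spammy-directory.example", "low-quality-articles.example"]
bad_urls = ["http://someblog.example/old-post-with-paid-link"]

lines = ["# Disavow file generated from link audit"]
lines += [f"domain:{domain}" for domain in sorted(set(bad_domains))]
lines += sorted(set(bad_urls))

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")

print(f"Wrote {len(lines) - 1} disavow entries to disavow.txt")
```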

After you take these actions to make sure your site’s page content and links now comply with Google’s quality standard, you should then submit a reconsideration request, for which you are reminded to:

  • be respectful.
  • be brief and to the point.
  • acknowledge what you have done.
  • clearly state that you intend to follow Google’s Guidelines from now on.

If you want more details about Google’s manual penalties, read Search Engine Land’s own Google Penalties Guide.

5 must-do technical SEO audit items in 2017
By Aleyda Solis, Search Engine Land, June 22, 2017

As the search world evolves, so must your technical SEO audits. Columnist Aleyda Solis discusses some new items to add to your audits in order to stay current.


In recent months, we’ve seen many important technically focused Google announcements, such as an update on JavaScript crawling support, the migration toward mobile-first indexing, the release and extended support of AMP in search results and the expansion of search results features, from rich snippets to cards to answers.

As a result, a number of technical items must be taken into consideration when doing an SEO audit to validate crawlability and indexability, as well as to maximize visibility in organic search results:

1. Mobile web crawling

Google has shared that a majority of its searches are now mobile-driven and that they’re migrating toward a mobile-first index in the upcoming months. When doing a technical SEO audit, it is now critical to not only review how the desktop Googlebot accesses your site content but also how Google’s smartphone crawler does it.
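
As a quick first pass before running a full crawl, you can fetch a handful of key URLs with both a desktop and a smartphone Googlebot-like user-agent string and compare the responses. This is only a sketch: it assumes Python with the requests library, the URLs are placeholders, and the user-agent strings are illustrative approximations rather than authoritative copies of Google's; a real audit should rely on the tools listed below.

```python
# Sketch: compare how a page responds to desktop vs. smartphone Googlebot-like
# user agents. Assumes Python 3 + "requests"; the UA strings are illustrative
# approximations of Googlebot's, not authoritative copies.
import requests

USER_AGENTS = {
    "desktop": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "smartphone": ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
                   "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
                   "+http://www.google.com/bot.html)"),
}

urls = ["https://www.example.com/", "https://www.example.com/category/"]  # your key pages

for url in urls:
    results = {}
    for name, ua in USER_AGENTS.items():
        r = requests.get(url, headers={"User-Agent": ua}, timeout=10)
        results[name] = (r.status_code, r.url, len(r.text))
    print(url)
    for name, (status, final_url, size) in results.items():
        print(f"  {name:<10} status={status} final_url={final_url} bytes={size}")
```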

[Image: Mobile crawling]

You can validate your site’s mobile crawlability (errors, redirects and blocked resources) and content accessibility (Is your content correctly rendered?) with the following technical SEO tools:

  • Google page-level mobile validators: Google’s Mobile-Friendly Test and Search Console’s “Fetch as Google” functionality (with the “Mobile: Smartphone” Fetch and Render option) are the simplest and best ways to check how Google’s mobile crawler “sees” any given page of your site, so you can use them to check your site’s top pages’ mobile status. Additionally, Google Search Console’s “Mobile Usability” report identifies specific pages on your site with mobile usability issues.
  • SEO crawlers with a ‘Smartphone Googlebot’ option: Most SEO crawlers now offer the option to specify or select a user agent, allowing you to simulate Google’s mobile crawler behavior. Screaming Frog SEO Spider, OnPage.org, Botify, Deepcrawl and Sitebulb all allow you to simulate the mobile search crawler behavior when accessing your site. Screaming Frog also lets you view your pages in a “List” mode to verify the status of a specific list of pages, including your rendered mobile pages.
  • SEO-targeted log analyzers: Last year, I wrote about the importance of doing log analysis for SEO and the questions it allows us to answer directly. There are log analyzers that are now completely focused on SEO issues, such as Screaming Frog Log Analyzer (for smaller log files), Botify and OnCrawl (for larger log files). These tools also allow us to easily compare our own crawls against what the mobile Googlebot has actually accessed and identify any gaps.


If you want to learn more about Mobile-First SEO, you can check out this presentation I did a couple of months ago.

2. JavaScript crawling behavior & content rendering

Three years ago, Google announced it was able to execute JavaScript in order to better understand pages. However, JavaScript tests, like this recent one from Bartosz Goralewicz or this one from Stephan Boyer, have shown that Google's ability to crawl and render JavaScript depends on how it is implemented and on the framework that is used.

It's therefore critical to follow certain best practices, such as a progressive enhancement approach that keeps content accessible, to avoid others, such as the former AJAX crawling proposal, and to rely on JavaScript only when it's completely necessary. Indeed, tests run by Will Critchlow also showed improvements in results when a site's reliance on JavaScript for critical content and internal links was removed.

When doing an SEO audit, it is now a must to determine if the site is relying on JavaScript to show its main content or navigation and to make sure it is accessible and correctly rendered by Google.
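
A simple way to spot-check this is to compare the raw, unrendered HTML response (what a non-rendering crawler sees) against what you expect on the rendered page. The sketch below just fetches the raw HTML and looks for a phrase and a navigation link you know should be there; if they are missing, the page likely depends on JavaScript for that content and deserves a closer look with Fetch as Google or a headless browser. It assumes Python with the requests library, and the URL, phrase and link values are placeholders.

```python
# Sketch: check whether key content/navigation is present in the raw,
# unrendered HTML. Assumes Python 3 + "requests". The URL, phrase and
# nav link below are placeholders for your own critical elements.
import requests

url = "https://www.example.com/products/widget"
expected_phrase = "Our best-selling widget"      # text that should be in the main content
expected_nav_href = "/categories/widgets"        # an internal link that should be crawlable

html = requests.get(url, timeout=10).text

for label, needle in [("main content phrase", expected_phrase),
                      ("navigation link", expected_nav_href)]:
    if needle in html:
        print(f"OK: {label} found in raw HTML")
    else:
        print(f"WARNING: {label} missing from raw HTML; "
              f"it may only appear after JavaScript rendering")
```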

[Image: JS crawling configuration]

At the page level, you can verify JavaScript crawling behavior and rendering again with Google Search Console’s “Fetch as Google” functionality, or with Chrome’s DevTools, by checking any page DOM with the elements panel and comparing it with what Google’s shows in its cache version, as shown in the screen shot below.

[Image: JS crawling and indexing]

For site-wide JavaScript crawling validation, you can use SEO crawlers like Sitebulb or Screaming Frog SEO Spider (which supports JavaScript Rendering through the “Spider Configuration”). As also shown in the screen shot above, you’ll be able to see how the content is rendered and if any required resources are blocked. For larger sites, you can use Botify, too, which has an on-demand JavaScript crawling option.

3. Structured data usage & optimization

Google SERPs haven’t been the traditional “10 blue links” for a long time, thanks to universal search results’ images, videos and local packs; however, the evolution took the next step with the launch and ongoing expansion of features like rich snippets, rich cards, knowledge panels and answer boxes. These features, according to SERP monitors like Mozcast and RankRanger, are now included in a non-trivial percentage of search results.

This shift means that attracting more clicks and visits through SEO efforts is now achieved not only through ranking well in organic listings but also by maximizing your site’s page visibility through these SERP features. In many cases, obtaining these display enhancements is a matter of correctly implementing structured data, as well as formatting and targeting your content to answer queries, where even modifiers can generate a change.

[Image: SERP features]

It has become critical to understand which of your popular and relevant queries can provide you more visibility through these various SERP features. By understanding which content has the opportunity to gain an enhanced display in SERPs, you can optimize that content accordingly with structured data and relevant formatting. You can identify these opportunities with search competition and keyword tools like SEMrush, Ahrefs and the Moz Keyword Explorer.

Once you identify which of your content you should format and optimize with structured data, you can use Google’s search gallery examples as a reference to do it and verify its implementation with the Structured Data Testing tool (as well as the Google Search Console’s Structured Data and Rich Cards reports).
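
As part of the audit, it can also help to pull the JSON-LD blocks already present on a page and review the types being declared. The sketch below does this with the requests and beautifulsoup4 libraries (assumed to be installed); it only covers JSON-LD, not microdata or RDFa, and the URL is a placeholder.

```python
# Sketch: list the JSON-LD structured data types declared on a page.
# Assumes Python 3 with "requests" and "beautifulsoup4" installed.
# Only inspects JSON-LD (not microdata/RDFa).
import json
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/products/widget"  # placeholder URL
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

for script in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(script.string or "")
    except json.JSONDecodeError:
        print("Invalid JSON-LD block found")
        continue
    items = data if isinstance(data, list) else [data]
    for item in items:
        print("Declared type:", item.get("@type"), "| keys:", sorted(item.keys()))
```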

It’s also important to start monitoring which SERP features you actually start ranking for and their traffic impact, which you can do through Google’s Search Console Search Analytics report with the “Search Appearance” and “Search Type” filters, as well as with rank-tracking tools like SEOmonitor that (as seen in the screen shot below) can look at your competitors, too.

[Image: SERP feature tracking]

4. AMP configuration

Designed to provide a “simpler” HTML version of your pages using optimized resources and its own cache to serve them faster in mobile search results, AMP has become a must for media websites and blogs, as well as sites with mobile speed issues that don’t have the flexibility of improving their existing pages.

AMP is now required to be included in Google’s news carousel, and its presence has “skyrocketed” in Google news. It’s also given preference over app deep links in search results, continues to be expanded through image results, and now will also be supported by Baidu in Asia.

Many sites (especially publishers) have adopted AMP as a result of these recent developments, so it’s key for your technical SEO audit to check a website’s AMP implementation and verify that it complies with Google’s requirements to be shown in search results.

Sitewide validation 

You can configure SEO crawlers to verify your AMP pages' existence and status code with custom HTML extraction through Screaming Frog, OnPage.org and Botify.

[Image: AMP validation via HTML extraction]

Additionally, as shown below, more SEO crawlers directly support AMP validation, with reports that flag whether AMP versions exist, as SEMrush's Site Audit functionality does, or that even verify common implementation issues, as Sitebulb does.

[Image: AMP validation by SEO crawlers]
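
If you prefer to script a similar sitewide check yourself, the sketch below looks for the rel="amphtml" link on a list of canonical URLs and confirms that each referenced AMP URL responds with a 200. It assumes Python with the requests and beautifulsoup4 libraries, the URL list is a placeholder, and it does not replace the official AMP validator; it only confirms that AMP versions exist and are reachable.

```python
# Sketch: verify that pages declare a rel="amphtml" link and that the AMP URL
# responds with a 200 status. Assumes Python 3 with "requests" and
# "beautifulsoup4". It does NOT validate the AMP markup itself; use the
# official AMP validator for that.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

urls = ["https://www.example.com/article-1", "https://www.example.com/article-2"]  # placeholders

for url in urls:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    amp_href = None
    for link in soup.find_all("link"):
        if "amphtml" in (link.get("rel") or []) and link.get("href"):
            amp_href = link["href"]
            break
    if not amp_href:
        print(f"{url}\tno AMP version declared")
        continue
    amp_url = urljoin(url, amp_href)
    status = requests.get(amp_url, timeout=10).status_code
    print(f"{url}\tAMP: {amp_url}\tstatus {status}")
```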

Page-level validation

It’s also recommended to examine certain pages individually for proper AMP implementation. You should review both your most important pages and those you have identified issues with when validating your overall site with SEO crawlers. This will provide you with more information about the existing errors in them, as well as how to fix them.

You can do this page-level validation by directly testing your code with the official AMP Playground to check whether it passes validation, and you can also use Chrome's AMP Validator to verify that the current page in the browser has an AMP version and to be notified of errors.

AMP pages can also be validated at a page-by-page level through Chrome’s DevTools, as well as the Official AMP validator and Google’s AMP Test, which, besides specifying if it passes or not, will also point out any issues and show a preview of how the page will look in mobile search results.

[Image: Google's AMP Test]

Ongoing monitoring

Once you've verified that AMP has been properly implemented and any errors have been fixed, you'll be able to monitor whether there are any remaining issues through the Google Search Console AMP report. In addition to noting errors in the AMP pages, it will also mark their level of "severity," flagging as "critical" those errors that will prevent your AMP pages from being shown in Google's search results, specifying which pages have the issue and noting which ones you should prioritize fixing.

Besides correctly setting and monitoring analytics, it’s important to directly monitor the visibility of AMP in Google search results, as well as its impact on your site traffic and conversions.

As shown below, this can be done through the Search Analytics monitor in Google Search Console, via the “Search Appearance” filter. You can obtain more information with rank trackers like SEOmonitor, which is now showing when a query is producing an AMP result for your site.

[Image: AMP tracking]

If you want to learn more about AMP implementation, take a look at the presentation I did about it a few months ago.

5. HTTPS configuration

Since mid-2014, Google has been using HTTPS as a ranking signal; last year, they also announced that to help users have a safer browsing experience, they would start marking HTTP pages that collected passwords or credit cards as “Not secure.”

It shouldn’t be a surprise that HTTPS migrations started to be prioritized as a result, and now over 50 percent of the pages loaded by Firefox and Chrome are using HTTPS, as well as half of page 1 Google search results. HTTPS has already become critical, especially for commerce sites, and not just because of SEO — it’s also essential to providing a trustworthy and secure user experience.

When doing an SEO audit, it’s important to identify whether the analyzed site has already done an HTTPS migration taking SEO best practices into consideration (and recovered the lost organic search visibility and traffic to pre-migration levels).

If the site hasn’t yet been migrated to HTTPS, it’s essential to assess the feasibility and overall importance of an HTTPS migration, along with other fundamental optimization and security aspects and configurations. Provide your recommendations accordingly, ensuring that when HTTPS is implemented, an SEO validation is done before, during and after the migration process.

To facilitate an SEO-friendly execution of HTTPS migrations, check out the steps described in these guides and references:

[Image: HTTP to HTTPS migration checklist]

Here are some of the most important SEO-related validations to make during an HTTPS migration:

  • Make sure you can migrate all of the content and resources — including images, JS, CSS and so on — that you use in your pages. If you’re using subdomains or other properties for CDNs, you will need to make sure to migrate them to start using HTTPS, too — otherwise you will end up having mixed content issues, as you will be showing non-secure content through your HTTPS pages. Be careful if you’re hotlinking, too, as the images might not be shown through HTTPS. A quick script for spotting mixed content references is sketched after this list.
  • Audit your web structure before migrating to make sure you consistently link, canonicalize, 301 redirect and refer in Hreflang and XML sitemaps to the original versions of each URL that you will later migrate. Make sure it will be feasible to update all of these settings to refer and link to the HTTPS URLs consistently, as well as to 301 redirect toward them when the migration happens.
  • Gather your top pages from a visibility, traffic and conversion perspective to monitor more closely when the migration is executed.
  • Create individual Google Search Console profiles for your HTTPS domains and subdomains to monitor their activity before, during and after the migration, taking into consideration that for HTTPS migrations, you can’t use the “change of address” feature in Google Search Console.
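
To complement the checks above, here is a minimal sketch that flags hard-coded http:// references (images, scripts, stylesheets, iframes) inside pages meant to be served over HTTPS, which is the most common source of mixed content issues. It assumes Python with the requests and beautifulsoup4 libraries, the page list is a placeholder, and browser tools such as Chrome DevTools remain the authoritative check.

```python
# Sketch: flag hard-coded "http://" resources referenced from HTTPS pages
# (a common cause of mixed content). Assumes Python 3 + "requests" +
# "beautifulsoup4"; the URL list is a placeholder for your own top pages.
import requests
from bs4 import BeautifulSoup

pages = ["https://www.example.com/", "https://www.example.com/checkout/"]

for page in pages:
    soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")
    insecure = []
    for tag, attr in [("img", "src"), ("script", "src"),
                      ("link", "href"), ("iframe", "src")]:
        for element in soup.find_all(tag):
            value = element.get(attr) or ""
            if value.startswith("http://"):
                insecure.append(value)
    if insecure:
        print(f"{page}: {len(insecure)} insecure reference(s)")
        for value in insecure:
            print("   ", value)
    else:
        print(f"{page}: no hard-coded http:// references found")
```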
[Image: HTTPS validation]

Here are some tools that can be very helpful throughout the HTTPS migration process:

  • For the implementation
  • For validation and monitoring
    • To follow up on Google’s “official” crawling, indexation and organic search visibility activity for both the HTTP and HTTPS domains, it’s a must to use individual Google Search Console properties and sets.
    • For SSL Checking & Mixed Content Issues, you can use SSL Shopper, Why No Padlock? and Google Chrome DevTools.
    • To emulate Googlebot for both desktop and mobile, use SEO crawlers like Screaming Frog, OnPage.org, Botify, Sitebulb, Deepcrawl and SEMrush Site Audit, which features a report about HTTPS implementation showing the most common issues.
    • To verify the direct crawling activity from the Googlebot in your HTTP and HTTPS URLs, use log analyzers like Screaming Frog Log Analyzer, Botify or OnCrawl.
[Image: HTTPS implementation]

If you want to learn more about best SEO practices for HTTPS implementation, take a look at the presentation I did about it a few months ago.

Start your audits!

I hope these tips and tools help you better prioritize and develop your SEO audits to tackle some of today's most important technical issues and opportunities.

Chrome's DevTools for SEO: 10 ways to use these browser features for your SEO audits
By Aleyda Solis, Search Engine Land, January 5, 2017

If you're conducting an SEO audit, you should probably be using DevTools in Google Chrome. Columnist Aleyda Solis shares 10 ways to use these tools to identify and fix SEO issues.


Although many of us in the industry pay hundreds or even thousands for SEO software that helps to automate and simplify our everyday work, we already have one of the most handy free SEO tool sets in the market with Google Chrome’s DevTools. Built into Google Chrome, this set of web authoring and debugging tools allows us to validate some of the most fundamental and critical SEO aspects of any page.

In most cases, using DevTools is pretty straightforward. But there are a few very handy and not so obvious applications for SEO analysis. Let’s go through them now.

Start by navigating to the page you want to investigate in Google Chrome and opening DevTools. You can do this by selecting More Tools > Developer Tools from the Google Chrome menu in the upper right-hand corner of your browser window.

[Image: Google Chrome DevTools]

Alternatively, you can use the keyboard shortcuts Ctrl+Shift+I (Windows) or Cmd+Opt+I (Mac), or you can right-click on the page element you’d like to investigate and click “Inspect” from the drop-down menu that appears.

1. Check your page’s main content elements, including JavaScript-rendered ones

Google is now able to crawl and index JavaScript-based content, but it doesn’t always do so flawlessly. Therefore, it is a must to verify the main content elements of your pages — not only by reviewing their HTML source code, but also by checking their DOM to see how the information will be actually rendered and identified by Google.

You can directly see the DOM of any page with DevTools in the “Elements” panel.

[Image: Chrome DevTools Elements tab]

Use this to review the content of any of the page’s elements (such as the title, meta description or page text) to verify that the desired relevant information is shown in each case, to make sure it’s accessible to Google.

For example, in the upper left screen shot below, we can see how the displayed main text (in red) is included in a <div>. But this is actually implemented with JavaScript, the code for which can be seen in the upper right screen shot below, which shows the page’s HTML source code.

[Image: JavaScript content rendering]

If you identify differences between the two, you can then take the appropriate steps to verify the content is being indexed correctly: reviewing the page's cached version in Google, checking whether the text is shown in Google's results when searching for it, using "Fetch as Google" in Google Search Console and so on.

2. Look for CSS hidden content

It's known that Google "discounts" text content that, although included in the page's HTML source code, is hidden from the user by default with tabs or menu options that need to be "clicked" to view or expand them.

Among the most common ways to hide text is by using CSS with the “display:none” or “visibility:hidden” properties, so it’s recommended to check if there’s any important information on your site pages that might be hidden via these means.
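
If you want to script a rough version of this check outside the browser, the sketch below counts occurrences of the hiding properties in a page's HTML and its linked stylesheets. It assumes Python with the requests and beautifulsoup4 libraries, the URL is a placeholder, and it will surface plenty of legitimate uses of these properties, so treat the output only as a list of things to review manually.

```python
# Sketch: count "display:none" / "visibility:hidden" occurrences in a page's
# HTML and its linked CSS files. Assumes Python 3 with "requests" and
# "beautifulsoup4"; expect false positives, so review the matches manually.
import re
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = "https://www.example.com/"  # placeholder
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

HIDDEN = re.compile(r"display\s*:\s*none|visibility\s*:\s*hidden", re.IGNORECASE)
print(f"In the HTML itself: {len(HIDDEN.findall(html))} occurrence(s)")

for link in soup.find_all("link"):
    if "stylesheet" not in (link.get("rel") or []) or not link.get("href"):
        continue
    css_url = urljoin(url, link["href"])
    css = requests.get(css_url, timeout=10).text
    print(f"In {css_url}: {len(HIDDEN.findall(css))} occurrence(s)")
```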

You can do this with Chrome’s DevTools “Search” drawer, which you can access by hitting Ctrl + Shift + F (Windows) or Cmd + Opt + F (Mac) when DevTools is open. This search feature will allow you to look not only through the opened page file itself but across all of the used resources, including CSS and JavaScript.

[Image: Searching for hidden content styles]

Here, you can search for the desired properties, in this case “hidden” or “display:none” to identify if/where they exist within the page code.

Click on a search result to view the full code surrounding the property you’ve searched for. Review the code to see which text is being hidden; if it’s important content, check if it’s being discounted by Google, and take the appropriate actions.

3. Verify your images ALT descriptions

Besides checking your pages’ main text content elements, you can also verify your images ALT descriptions in the “Elements” panel. Right-click on any image and select “Inspect,” as shown here:

[Image: Verifying images' ALT descriptions]

4. Identify no-indexation & canonicalization configuration in pages & resources

You can also use Chrome’s DevTools to inspect your pages’ technical configuration. For example, you can validate the meta robots and canonical tags configuration, which can be done by using the “Search” drawer to look for these specific tags and review how they’re implemented.

[Image: DevTools search for canonical tags]

This validation can be done not only for the implementation of these tags in the HTML <head> area, but also in the HTTP headers, by going to the “Network” panel and selecting the desired page or resource to verify their header information, including the existence of a link rel=”canonical” in an image file, for example, as shown below.

[Image: DevTools Network panel headers]
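
The same header inspection can be scripted for a list of pages and resources. The sketch below prints the X-Robots-Tag header and any Link header (where a rel="canonical" for non-HTML resources would appear); it assumes Python with the requests library, and the URLs are placeholders.

```python
# Sketch: review indexation-related HTTP headers (X-Robots-Tag, Link) for a
# list of pages and resources. Assumes Python 3 with "requests"; the URLs are
# placeholders. Some servers mishandle HEAD requests; switch to
# requests.get(..., stream=True) if the responses look wrong.
import requests

resources = [
    "https://www.example.com/",
    "https://www.example.com/images/hero.jpg",
]

for resource in resources:
    response = requests.head(resource, allow_redirects=True, timeout=10)
    print(resource)
    print("  status:      ", response.status_code)
    print("  X-Robots-Tag:", response.headers.get("X-Robots-Tag", "(not set)"))
    print("  Link:        ", response.headers.get("Link", "(not set)"))
```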

5. Look for the HTTP status in the header configuration

When checking the header configuration of pages and resources using the "Network" panel, you can also verify their HTTP status and see if there are any redirects (and which type), as well as error statuses and the inclusion of other configurations such as the X-Robots-Tag, hreflang or "Vary: User-Agent" headers.

[Image: HTTP headers validation]

6. Validate your mobile web optimization by emulating any device

Mobile-friendliness is now essential for SEO, and you can validate page configuration and content in mobile using the DevTools “Device Mode.” This option can be enabled by clicking the device icon in the upper left-hand area of the panel or by pressing Command+Shift+M (Mac) or Ctrl+Shift+M (Windows, Linux) while the DevTools panel is open.

[Image: DevTools device toolbar]

Once in Device Mode, you will be shown viewport controls in the upper area of the window. Here, you can select a responsive viewport (freely resizable) or choose a specific device to test your page with. You can also add your own custom device using the “Edit…” option.

[Image: Emulating mobile devices]

7. Assess your page load time & identify speed issues

Analyze any page load time by emulating the network conditions and device used. For this, you can go to the “Network Conditions” tool, which you can access from the Customization menu in the upper right-hand corner of the panel under “More tools.”

[Image: DevTools network conditions]

Here you will find the “Caching,” “Network throttling” and “User agent” configurations.

[Image: Emulating network conditions]

Once you’ve chosen your settings, you can reload the desired page and go to the “Network” panel to see not only the page’s full load time (Load) but also when the initial markup of the page has been parsed (DOMContentLoaded), both of which appear at the bottom of the window. You can also view the load times and sizes of each one of the used resources, which can be recorded by clicking on the red button at the left side of the panel.

To get recommendations of actions to take in order to improve the page speed performance, navigate to the “Audits” panel and click the “Run” button. Each recommendation will specify the related resources causing issues, as can be seen below:

[Image: Network insights, resources and recommendations]

Additionally, you can capture screen shots of your page rendering with the “Filmstrip” option. By clicking on the camera icon and reloading the page, you can view screen shots of your page across various stages of loading, which allows you to verify what your users can see at every stage.

[Image: Filmstrip screen shots of page load]

8. Identify render-blocking resources

You can also use the resources load times data in the “Network” panel to identify which JS and CSS resources are loading before the DOM and potentially blocking it, which is one of the most common page speed issues. You can filter by CSS and JS by clicking the buttons for them (as shown below).

[Image: Render-blocking resources]

9. Look for non-secure resources during HTTPS migrations

Chrome DevTools can be very helpful during HTTPS migrations, as it allows you to identify security issues in any page with the information provided in the “Security” panel. You can see if the page is secure and has a valid HTTPS certificate, the type of secure connection, and if there are mixed content issues due to non-secure origins of any used resource.

[Image: Secure pages and resources validation in Chrome DevTools]

10. Validate your Accelerated Mobile Pages (AMP) implementation

If you’re implementing AMP, you can see if there are any issues with your pages by adding “#development=1” string to your AMP URL, then checking in the “Console” panel. This will tell you if there are any issues, warnings or errors — and in which element and line of code these have been found so you can take the appropriate action to fix them.

[Image: AMP pages validation]

Bonus: Personalize Chrome DevTools settings

As you can see, you can use Chrome DevTools for SEO in so many ways that you will likely spend some time in it. Thus, you might want to personalize it a bit based on your own preferences. You can adjust the appearance, enable shortcuts and more.

You can do this in the “Preferences” settings, which can be found by clicking on the Customization menu button in the upper right-hand corner of the panel and choosing the “Settings” option.

[Image: Chrome DevTools preferences]

Last but not least, speaking of personalizing preferences: minified code can be difficult to parse, so make sure to click on the pretty print "{}" button in the central bottom area of the panel to see a non-minified version. This will allow you to go through the code easily, with improved readability.

[Image: Pretty print in Chrome DevTools]

I hope these Chrome DevTools tips help to make your SEO life easier!

Ready for Black Friday 2016? 3 critical e-commerce SEO configurations that help you avoid common issues
By Aleyda Solis, Search Engine Land, November 16, 2016

Columnist Aleyda Solis lays out critical tips for configuring your e-commerce website for SEO success this holiday season.


With the most important season for online retailers coming, it’s critical to make sure that your online store is optimized so you’re ready to make the most out of it. Last year, online shoppers spent $4.45 billion on Black Friday and Thanksgiving Day alone. And according to Custora’s E-Commerce Pulse, in September of this year, 22 percent of e-commerce orders in the US were driven by organic search traffic (vs. 20 percent coming from email and 19 percent from PPC):

[Image: Organic orders in e-commerce]

Coincidentally, in Econsultancy’s Technology for Ecommerce Report 2015 (based on a survey of 600+ client-side and agency e-commerce professionals), “in-built SEO capabilities” was stated to be one of the most critical functionalities to be taken into consideration when selecting an e-commerce solution:

[Image: SEO functionality in e-commerce platforms]

The reality is that although many e-commerce platforms come with SEO-related functionalities built in, these won’t be optimized by default.

These functionalities will help ease the optimization process, but they will need to be manually configured based on your site architecture, product characteristics and content needs to effectively target your users’ search behavior.

Here are three of the most fundamental SEO aspects to configure in your e-commerce platform:

1. Allow only the crawling of URLs that you want to index & rank to avoid crawl budget issues

Among others, there are two very common scenarios in e-commerce platforms that cause crawl budget problems:

  1. Generating more than one URL for the same page content. For example, perhaps you have the same product appearing within multiple product categories, so you end up having multiple URLs for each instance of the product page instead of just one. Due to this, you will need to no-index or canonicalize those additional URLs to the “original” product URL version to avoid having content duplication issues.
  2. Crawlable URLs by default for every existing filter in listing pages. For example, in category listings, visitors may have the option to sort the existing list of products by criteria like size, color, popularity or pricing. This generates specific, crawlable URLs for every combination, most of them showing the same or very similar content, and they will end up needing to be noindexed.

For example, the following filtered listing URL is being canonicalized to the main one, without parameters:

[Image: Canonicalized filtered page]

Although this is good for avoiding content duplication or cannibalization issues, these URLs usually remain crawlable, which doesn't help to solve crawl budget issues. It's also important to prevent the crawling of these pages that are not meant to be indexed (or ranked); otherwise, we will end up having a scenario like this:

[Image: Crawl budget distribution]

This e-commerce site has only 1.6 percent of active URLs bringing organic search traffic to the site, from a total of 200,000 crawlable ones. A non-trivial amount of crawled URLs are indexable (compliant) but are not bringing any organic search traffic. Worse, 89 percent of the crawled URLs are non-indexable and without organic search visits (“not compliant not active” in the graph above).

Most of these “not compliant not active” URLs are canonicalized or redirected to others, as can be seen in the graph below, and the ones that are “compliant” (indexable and meant to be ranked) are found very deep in the site architecture, making them harder to reach.

[Image: Non-indexation and crawling causes]

Another reason so many non-indexable pages are crawled is the automated generation of XML sitemaps by the platform itself, which might include all of the generated URLs (including those that are non-indexed, redirected, canonicalized to others) instead of only those which are meant to be indexed.

[Image: XML sitemaps optimization]

It’s then critical to configure the XML sitemaps in e-commerce platforms to only include the original URLs of each page — the ones that will be indexable and rankable, the ones we want our users to share, link to and arrive at. We will need to refresh these XML sitemaps as often as the content is updated.
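
If your platform does not offer enough control over this, regenerating the sitemap yourself from the list of canonical URLs is straightforward. Below is a minimal sketch of the standard sitemap format, assuming Python and a hypothetical "canonical_urls.txt" export; a production version would pull the URLs from your database, refresh on a schedule and split files before reaching the protocol's 50,000-URL-per-file limit.

```python
# Sketch: build a simple XML sitemap containing only canonical, indexable URLs.
# Assumes Python 3 and a hypothetical "canonical_urls.txt" export with one URL
# per line; a production version would pull URLs from the database, refresh on
# a schedule and split files before the 50,000-URL-per-file limit.
from datetime import date
from xml.sax.saxutils import escape

with open("canonical_urls.txt") as f:
    canonical_urls = [line.strip() for line in f if line.strip()]

today = date.today().isoformat()
entries = "\n".join(
    f"  <url>\n    <loc>{escape(url)}</loc>\n    <lastmod>{today}</lastmod>\n  </url>"
    for url in canonical_urls
)

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)

print(f"Wrote sitemap.xml with {len(canonical_urls)} URLs")
```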

It’s also fundamental to only link to the original URL version of each page internally. If we still need to allow users to access non-indexable pages, like those generated by sorting filters, it’s then important to implement them in a way that they won’t be crawled. This can be achieved in many ways, like using hash (#) parameters instead of query (?) ones to generate these filtered URLs if possible.

The North Face does just that in this example for their listings sorting:

[Image: Non-crawlable filtered pages]

Another alternative is linking to these non-indexable or canonicalized filtered listings with a non-crawlable script, as in this example:

[Image: Non-crawlable canonicalized listings]

On the other hand, in order to provide a way for users to navigate through the different categories and levels of products in a relevant way — one that also prioritizes the accessibility of the original URLs that are meant to be indexed and ranked — it’s important to make sure that we always link up, sideways and down:

  • Up to the main category levels of the site, as well as to the upper hierarchical levels of where we are with a breadcrumb.
  • Sideways to related and sister product lines or subcategories of the current category level.
  • Down to the next level of product segmentation, more granular sub-categories or filters of the current ones, as well as the specific products.

[Image: E-commerce navigation]

By doing this, we will eliminate the excess of “not meant to be indexed” URLs and improve the crawling behavior and performance of the site, as well as likely improving the performance of those pages that are meant to be indexed and ranked by properly prioritizing them:

[Image: Increase in active indexable URLs]

2. Set indexation rules based on supply & demand to avoid duplication, cannibalization and index bloat issues

Besides crawl budget, content is another important consideration for e-commerce SEO issues; specifically, we need to watch out for content duplication, cannibalization and thin content.

Although we might want to control the generation of content duplication by making sure that there’s only one indexable (and crawlable) URL per page, sub-category and category, the reality is that we will likely end up in the following situation: a high share of pages featuring very little text content, as well as very similar content between the indexable pages:

[Image: Content issues]

The reason for this is that we will likely have category pages that will show similar products to their subcategory pages, as in the example below. (This is especially true when there are many subcategories and not so many products.)

[Image: Overlapping categories]

We may also have subcategory pages with a lot of overlap between them in terms of the products they offer. However, due to the existing product lines, offerings and user behavior, it’s necessary to keep all these similar pages.

The best way to avoid the cannibalization (and overlap) of content between the different levels of our site in a structural, scalable way is by organizing their content differently based on their level of granularity (usually determined by URL depth). This allows us to match page content with user behavior.

Amazon, for example, is great at this. Take a look at what happens when a user drills down from the home page all the way down to a specific product (in this case, a bicycle):

[Image: Content organization by site level]

Of note here:

  • The general Cycling category page. This is organized as a “hub” page (instead of a listing of products), allowing the user to browse through the very different product sub-categories available (bikes, tubes, helmets, accessories, footwear, hydration and more). Amazon understands that a user who navigates to that page will likely want to filter the search and browse through more specific options in order to buy something.
  • A specific bike sub-category. This page is organized as a list, since it features products of the same type and with similar characteristics that fulfill the specific criteria of the category, where it really makes sense to "showcase" them directly to the user to buy.

By differentiating the site levels like this, we not only reduce the possibility that these main categories' content will overlap with (and cannibalize) the subcategories' content, but will also likely:

  1. improve the crawling of the different levels of sub-categories and main products; and
  2. provide a better user experience by organizing the content in a way that matches the search intent.

Another way to differentiate the content of similar categories, as well as to increase their topical relevance, is to feature specific text descriptions on each one of them. This text should not challenge the listing conversion by obstructing the product views (which is one of the main e-commerce concerns I usually see) but actually support it by providing relevant information to users that complements the text shown on individual product pages.

[Image: Descriptive text in listings]

A usual argument against creating specific descriptions for category pages is their related cost — there are so many of them! The solution is to selectively create this content based on “supply and demand” to make a decision that is profitable, using the same flowchart that we can also follow to determine page indexation priority:

  • Is the audience searching for it? If so, does the search interest around it compensate the indexation and content creation efforts and costs?
  • In that case, are there enough relevant products to feature on that specific page that will fulfill the users’ needs?

If the answers to these questions are “yes,” then it’s likely that content development and indexation for these pages will be profitable and shouldn’t be seen as a cost, but an investment.

[Image: Faceted pages indexation flowchart]

Another very common content duplication and overlapping scenario is the one caused by internal search engines whose result pages are improperly configured. These pages will often show the same content as category pages, fulfilling also the same intent.

[Image: Category page vs. internal search result page]

The best way to handle this scenario is to have an internal search engine that searches through the already existing static category and subcategory pages that are part of the site navigation system. When it identifies a match, instead of generating an independent URL with the results, it refers the user to the already existing category or subcategory page. This helps to avoid generating new URLs that will show the same content that already exists elsewhere on the website.

Along these lines, it’s also important that the internal search engine tracks the searched queries so that you can glean helpful insights about user behavior. If certain keywords or phrases are searched often, but there isn’t a matching category or subcategory, an alert can be sent of a potential new category, subcategory or filter page that can be enabled. From there, follow the previously shared flowchart to decide if it’s worth implementing.

If this is not possible, then an easy but less elegant fix is to add a "noindex, follow" meta robots tag to the internal search results pages, as well as blocking their crawling, although this will mean that at some point your users might decide to share and link to those "non-indexable" pages. Although those pages might pass their link popularity value on to other indexable pages, it is still a missed opportunity to capitalize on the relevant category page.

[Image: Noindexation of search results pages]

Moving to a more granular level, product pages also have common content issues — specifically those related to very similar products that end up showing almost identical descriptions. In most cases, this doesn’t only happen internally at the site level, but across the content of other e-commerce sites also featuring exactly the same products.

This can be avoided by emphasizing the specific characteristics that differentiate each product, by featuring them in the main content elements of the site: title, meta description, headings and so on. For example:

[Image: Similar product pages]

Additionally, the best way not only to differentiate product pages but also to keep their content unique and fresh, while supporting users in their buying decision, is to incentivize user-generated content such as Q&As, reviews and ratings. If implemented with the relevant structured data, these will also gain additional visibility in Google search results through rich snippets, as eBags does here:

[Image: Product page reviews rich snippet]
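
To illustrate the kind of markup involved, here is a minimal sketch that generates a Product block with an AggregateRating as JSON-LD. The product name and figures are placeholders; in practice they would come from your review system, and Google's structured data documentation remains the reference for required and recommended properties.

```python
# Sketch: generate a JSON-LD Product block with an AggregateRating from review
# data. Values are placeholders; in practice they come from your review system.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Carry-On Bag",          # placeholder product data
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.6,
        "reviewCount": 128,
    },
}

# Embed the output inside a <script type="application/ld+json"> tag in the
# product page template.
print(json.dumps(product, indent=2))
```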

If you want to see which of your competitors are already making the most out of these, you can use SEMrush SERP features reports and obtain the queries for which they are ranking with them (as well as the related position and URLs).

[Image: SERP features report in SEMrush]

3. Set elimination rules for non-available products, listings & campaign landing pages to keep their value

A very particular characteristic of e-commerce sites is how seasonality impacts their content, from products to campaigns.

Let’s start with campaign landing pages. They are often seasonally oriented (Black Friday, Christmas, New Year’s, Valentine’s Day and so forth) and are usually reused from time to time. The problem is that due to their seasonal nature, they are sometimes not fully capitalized from an SEO perspective, despite the fact that they can achieve a very high level of traffic.

The most common mistake is to just eliminate these pages directly after their relevant campaign time has passed, as shown below with the Black Friday landing page of carrefour.es (compare that to Amazon, which doesn’t suffer from this issue):

[Image: Lost landing pages returning errors]

If we use the Internet Archive’s Wayback Machine, we can see how this Carrefour Black Friday landing page was used in 2014 and is now showing a 404 Error page, which is a pity, since it still has 31 links from 13 domains pointing to it:

[Image: Black Friday landing page returning a 404 error]

The best way to handle recurring campaigns is to leave their landing pages permanently published and simply update the content to explain that the offer is not currently available but will be back once the season comes around again. Try offering a countdown, let users sign up for an alert to come back, and point them to other offers available in the meantime.

Amazon does this, as illustrated below with a (surprise, surprise) already optimized landing page for “Black Friday 2016.” This is a good practice with seasonally driven campaigns: use the year along with the keyword, as people search with it in order to find the offers relevant for the current year.

[Image: Amazon's Black Friday 2016 landing page]

Another very common scenario is the one that happens when products run out of stock. Often times, these product pages are just eliminated from the site, showing a 404 error.

The best way to handle out-of-stock product pages, in case they might be back in the future, is to leave the page as indexable and add an “out of stock” notice, as well as a form to be alerted when the product is back in stock, while offering similar alternatives. That way, you will be able to keep the potential rankings that the page might have earned over time while providing a good user experience.

[Image: Out-of-stock product page]

But what should you do in case the product is permanently out of stock, or the landing page is just for a one-time campaign and the pages won’t be reused? The best way to handle that is by 301-redirecting them to their parent category, so the relevance is kept and users can continue browsing through other similar offerings on the site:

[Image: E-commerce product elimination flowchart]

Too much work and resources?

These are only the fundamentals. That’s why it’s important to have a flexible e-commerce platform that facilitates these implementations.

Remember that it’s all about the results, and if you have done a good job with SEO, you’ll be able to directly see these kinds of results:

[Image: E-commerce SEO results]

This is the ultimate goal of an e-commerce SEO process: to help you grow your sales and profit.

7 fundamental technical SEO questions to answer with a log analysis (and how to easily do it)
By Aleyda Solis, Search Engine Land, April 5, 2016

According to columnist Aleyda Solis, a thorough analysis of your server logs can help uncover SEO insights you might not have gleaned otherwise.


Log analysis has evolved to become a fundamental part of technical SEO audits. Server logs allow us to understand how search engine crawlers interact with our website, and analysis of your server logs can lead to actionable SEO insights you might not have gleaned otherwise.

First: Choosing your tools

Many tools are available to help with a server log analysis, and which one is right for you will depend on your technical knowledge and resources. There are three types of log file analysis tools that you will want to consider (unless you’re doing it from the command line, which I wouldn’t recommend if you don’t already have experience with it):

Excel

If you know your way around Excel — if creating pivot tables and using VLOOKUP is second nature to you — you might want to give Excel a go by following the steps shown in this guide by BuiltVisible.

It’s important to note, too, that even if you use one of the other tool options, at some point you will need to export the data you have collected into Excel. This will output the data into a format that’s easy to integrate or compare with other data sources such as Google Analytics or Google Search Console.

Whether you use Excel throughout the entire analysis or only at the end will depend on how much time you want to spend using it to filter, segment and organize the data.

Open source tools

This is your choice if you don’t have a budget to spend on tools but you do have the technical resources to configure them. The most popular open source option is Elastic’s ELK stack, which includes Kibana, Elasticsearch and Logstash.

Paid tools

This is really the best option if you don’t have technical support or resources, especially as these tools are pretty straightforward to set up. A few options also support cURL in case you need to manually upload the log files (instead of connecting directly to the server to monitor):

  • Splunk is probably the best-known paid log analyzer on the market, although it’s not the cheapest option. However, it has a free light version that you might want to check out.
  • Logz.io offers ELK as a service (it’s cloud-based), considers SEO one of its use cases, and has a free option, too.
  • Loggly also has a limited free version. This is the one I use at the moment, after having tried the others, and it is the program you will see in my screen shots throughout the piece. Loggly is cloud-based, and I really like its straightforward interface, which makes filtering and searching easy. That lets me save my time for the analysis instead of spending it segmenting and filtering data.

Once you have chosen the best tool for your case, it’s time to start with the analysis. You’ll want to ensure that you focus your analysis on actionable SEO items, as it’s easy to get lost in the non-SEO-oriented environment.

Here are a few questions that help me drive log analysis, and how I easily answer them (using Loggly, in my case). I hope this will allow you to see how you can also go through your logs in a non-painful way and analyze them for your own SEO process.

1. What bots access your site? Look for spambots or scrapers to block.

Logs follow a predefined format. As you can see in the screen shot below, identifying the user agent can be done more easily in Loggly with predefined filters for the log fields.

Which bots access the site?


When you start doing log analysis for the first time, it might be worthwhile to check not only the activity of search bots (such as the Googlebots, Bingbots or Yandex bots), but also potential spambots that can generate performance issues, pollute your analytics and scrape your content. For this, you might want to cross-check with a list of known user agents, such as this one.

Look for suspicious bots, and then analyze their behavior. What’s their activity over time? How many events have they had during the selected time period? Does their appearance coincide with performance or analytics spam issues?

If this is the case, you might want not only to disallow these bots in your robots.txt file but also to block them via your .htaccess file, as they often won’t follow the robots.txt directives.
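
If you prefer to answer this question with a quick script instead of (or alongside) a log analysis tool, here is a minimal Python sketch that counts user agents in a raw access log. It assumes the standard combined log format and a hypothetical access.log file name; adjust the regular expression to match your own server’s format.

```
import re
from collections import Counter

# Matches the standard combined log format, e.g.:
# 66.249.66.1 - - [24/Mar/2016:22:30:15 +0000] "GET /page HTTP/1.1" 200 5123 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; ...)"
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

agents = Counter()
with open("access.log") as log_file:  # placeholder file name
    for line in log_file:
        match = LOG_RE.match(line)
        if match:
            agents[match.group("agent")] += 1

# Search bots should dominate; any unfamiliar agent with a high count is worth
# cross-checking against a list of known spambots and scrapers.
for agent, hits in agents.most_common(20):
    print(f"{hits:>8}  {agent}")
```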

2. Are all of your targeted search engine bots accessing your pages?

Once you have identified the bots that arrive at your site, it’s time to focus on the search engine bots to make sure they’re successfully accessing your pages and resources. With the “userAgent” filter in Loggly, you can directly select the ones you want to analyze, or search for them by name using Boolean operators.

Once you’ve filtered to show only the search engine bots you’re interested in, you can select the graph option to visualize their activity over time. Which search bots have the highest level of activity on your site? Do they coincide with the search engines you want to rank with?

For example, in this case, we can see that one of the Googlebots has twice as much activity as one of the Bingbots and had a specific spike at 22:30h on March 24.

Search Bots & Accessed Pages


What’s important here is not only that the search bots come to your site, but that they actually spend their time crawling the right pages. Which pages are they crawling? What’s the HTTP status of these pages? Are the search bots crawling the same pages or different ones?

You can select each of the search user agents you want to check and export the data to compare them using pivot tables in Excel:

HTTP Status by User Agent

Based on this initial information, we’re going to start digging deeper to verify not only how these bots differ in crawling behavior, but also whether they really are crawling where they should be.

3. Which pages are not correctly serving? Look for pages with 3xx, 4xx & 5xx HTTP statuses.

By searching for your desired search bot (in this case, Googlebot), and then choosing the “status” filter, you can select the HTTP values of the pages you want to analyze.

I recommend looking for those with 3xx, 4xx and 5xx status codes, as you want to see redirected or error pages that you’re serving to the crawlers.

Error & Redirected Pages for Googlebot


From here, you can identify the top pages generating most of the redirects or errors. You can export the data and prioritize these pages to be fixed in your SEO recommendations.
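
The same kind of filtering can be scripted. This hedged Python sketch, again assuming the combined log format and a hypothetical access.log file, keeps only Googlebot requests with 3xx, 4xx or 5xx responses and ranks the URLs generating them:

```
import re
from collections import Counter

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

problem_pages = Counter()
with open("access.log") as log_file:  # placeholder file name
    for line in log_file:
        match = LOG_RE.match(line)
        if not match or "Googlebot" not in match.group("agent"):
            continue
        status = int(match.group("status"))
        if status >= 300:  # 3xx redirects, 4xx client errors, 5xx server errors
            problem_pages[(status, match.group("path"))] += 1

# Prioritize the URLs that waste the most Googlebot requests on redirects or errors.
for (status, path), hits in problem_pages.most_common(25):
    print(f"{hits:>6}  {status}  {path}")
```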

4. What are the top crawled pages by each of the search bots? Verify if they coincide with your site’s most important ones.

When searching for your desired search bot, you can directly select the “requestURI” filter to get a list of the top web documents, whether resources or pages, that the bot is requesting. You can review these directly in the interface (to verify that they’re featuring a 200 HTTP status, for example) or export them to an Excel doc, where you can determine if they coincide with your high-priority pages.

Top Crawled Pages


If your most important pages are not among the top crawled pages (or worse, are not included at all), you can then decide on the appropriate actions in your SEO recommendations. You might want to improve internal linking to these pages (whether from the home page or from some of the top crawled pages that you identified), then generate and submit a new XML sitemap.
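
As a rough script-based alternative, the sketch below (same combined-log and file-name assumptions as before) counts the URLs Googlebot requests most often and flags any that are not in a hypothetical list of your priority pages:

```
import re
from collections import Counter

LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?:\S+) (?P<path>\S+)[^"]*" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

top_crawled = Counter()
with open("access.log") as log_file:  # placeholder file name
    for line in log_file:
        match = LOG_RE.match(line)
        if match and "Googlebot" in match.group("agent"):
            top_crawled[match.group("path")] += 1

# Hypothetical list of your high-priority URLs (e.g., taken from your XML sitemap).
priority_pages = {"/", "/category/shoes/", "/category/bags/"}

print("Top crawled URLs:")
for path, hits in top_crawled.most_common(20):
    flag = "" if path in priority_pages else "  <- not a priority page"
    print(f"{hits:>6}  {path}{flag}")

missing = priority_pages - set(top_crawled)
print("Priority pages never crawled in this period:", missing or "none")
```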

5. Are the search bots crawling pages that they shouldn’t?

You’ll also want to identify pages and resources that are not meant to be indexed and thus shouldn’t be crawled.

Use the “requestURI” filter again to get a list of the top requested pages by your desired bot, then export the data. Check to see whether the pages and directories you have blocked via robots.txt are actually being crawled.

Pages that shouldn't be crawled


You can also check for pages which are not blocked via robots.txt but shouldn’t be prioritized from a crawling perspective — this includes pages that are noindexed, canonicalized or redirected to other pages.

For this, you can do a list crawl from the exported list with your favorite SEO crawler (e.g., Screaming Frog or OnPage.org) to add the additional information about their meta robots noindexation and canonicalization status, in addition to the HTTP status that you will already have from the logs.
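
You can also cross-check the crawled URLs against your robots.txt directly from Python with the standard library’s robotparser module. The robots.txt URL and the sample crawled URLs below are placeholders:

```
from urllib import robotparser

# Hypothetical inputs: your domain's robots.txt and a list of URLs exported
# from the "requestURI" report (the paths Googlebot actually requested).
parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

crawled_urls = [
    "https://www.example.com/category/shoes/",
    "https://www.example.com/internal-search?q=shoes",   # supposedly blocked
    "https://www.example.com/checkout/",                 # supposedly blocked
]

for url in crawled_urls:
    if not parser.can_fetch("Googlebot", url):
        # The URL is disallowed for Googlebot, yet it showed up in the logs:
        # either the rule was added after the crawl or the directive isn't working.
        print("Disallowed but crawled:", url)
```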

6. What’s your Googlebot crawl rate over time, and how does it correlate with response times and serving error pages?

Unfortunately, the data that can be obtained through Google Search Console’s “Crawl Stats” report are too generic (and not necessarily accurate enough) to take action. Thus, by analyzing your own logs to identify the Googlebot crawl rate over time, you can validate the information and segment it to make it actionable.

With Loggly, you can view Googlebot’s activity over the desired time range in a line chart, where the HTTP status can be shown independently in order to verify spikes over time. Knowing what type of HTTP requests occurred, and when, will show whether errors or redirects were triggered, which could lead to inefficient crawling behavior from Googlebot.

Googlebot crawl rate


You can do something similar by plotting the size of the files requested by Googlebot over a desired period of time to identify whether there’s a correlation with changes in crawling behavior, and then take the appropriate actions to optimize them.
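
If you want to reproduce this kind of chart’s underlying data yourself, a small sketch like the following buckets Googlebot hits per hour and reports the share of error responses in each bucket (the combined log format and the access.log file name are, again, assumptions):

```
import re
from collections import Counter
from datetime import datetime

LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[(?P<time>[^\]]+)\] "[^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

hits_per_hour = Counter()
errors_per_hour = Counter()
with open("access.log") as log_file:  # placeholder file name
    for line in log_file:
        match = LOG_RE.match(line)
        if not match or "Googlebot" not in match.group("agent"):
            continue
        # Apache-style timestamp, e.g. 24/Mar/2016:22:30:15 +0000
        when = datetime.strptime(match.group("time"), "%d/%b/%Y:%H:%M:%S %z")
        hour = when.strftime("%Y-%m-%d %H:00")
        hits_per_hour[hour] += 1
        if int(match.group("status")) >= 400:
            errors_per_hour[hour] += 1

for hour in sorted(hits_per_hour):
    total, errors = hits_per_hour[hour], errors_per_hour[hour]
    print(f"{hour}  crawled: {total:>5}  errors: {errors:>4} ({errors / total:.0%})")
```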

7. What are the IPs Googlebot is using to crawl your site? Verify they’re correctly accessing the relevant pages and resources in each case.

I’ve included this one specifically for websites that serve different content to users in different locations. In some cases, such websites are unknowingly providing a poor experience to crawlers with IPs from other countries — from blocking them outright to letting them access just one version of the content (preventing them from crawling other versions).

Google now supports locale-aware crawling to discover content specifically meant to target other countries, but it’s still a good idea to ensure that all your content is being crawled. If not, this may indicate that your website is not properly configured.

After segmenting by user agent, you can then filter by IP to verify that the site is serving the right version of each page to the crawlers coming from the relevant countries.

Googlebot IPs

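
Before drawing conclusions from the IP data, it’s also worth confirming that requests claiming to be Googlebot actually come from Google, since spoofed user agents are common. Google’s documented method is a reverse DNS lookup followed by a forward confirmation, which can be sketched in a few lines of Python (the sample IPs are placeholders):

```
import socket

def is_genuine_googlebot(ip: str) -> bool:
    """Reverse-DNS the IP, check the host belongs to Google, then forward-confirm."""
    try:
        host = socket.gethostbyaddr(ip)[0]
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward-confirm: the hostname must resolve back to the same IP.
        return ip in {info[4][0] for info in socket.getaddrinfo(host, None)}
    except OSError:
        return False

# Hypothetical IPs pulled from the log field you just filtered on.
for ip in ["66.249.66.1", "203.0.113.50"]:
    print(ip, "-> genuine Googlebot" if is_genuine_googlebot(ip) else "-> NOT Googlebot")
```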

For example, take a look at what happens when I try to access the NBA site at www.nba.com with a Spanish IP: I am 302-redirected to a basketball subdomain of the AS website (a local sports newspaper in Spain), as can be seen in the screen shot below.

NBA Redirect Location

Something similar happens when I enter from a French IP; I get 302 redirected to the basketball subdirectory of L’Equipe, a local sports newspaper in France.

IP Location Based Redirect

I’ve explained in the past why I’m not a fan of internationally targeted automatic redirects. However, if they’re meant to exist for business (or any other) reasons, then it’s important to give a consistent behavior to all crawlers coming from the same country — search bots and any other user agents — making sure that SEO best practices are followed in each case.

Final thoughts

I hope that going through these questions — and explaining how they can be answered using log analysis — will help you to expand and strengthen your technical SEO efforts.

The post 7 fundamental technical SEO questions to answer with a log analysis (and how to easily do it) appeared first on Search Engine Land.

]]>
Discovering Opportunities To Drive Your Mobile Web & App Optimization Strategy /discovering-opportunities-drive-mobile-web-app-optimization-strategy-237625 Tue, 12 Jan 2016 15:25:18 +0000 http:/?p=237625 When it comes to optimizing your mobile presence, where do you begin? Columnist Aleyda Solis shares the questions you should ask yourself to get started.

The post Discovering Opportunities To Drive Your Mobile Web & App Optimization Strategy appeared first on Search Engine Land.

]]>

With the steady rise of mobile search, SEO practitioners these days need to consider how to optimize both their websites and apps for mobile search visibility.

Google’s mobile-friendly algorithm update, released in April 2015, gave a boost in mobile search results to pages with good mobile user experiences. Along those lines, Google has sought to increase the visibility of app content within mobile search results through app indexing. Thus, it’s key to have a strategic approach to our mobile optimization efforts — both on our websites and apps.

The now more mature mobile web and app optimization tools make this analysis possible, and straightforward, too. I’ll be discussing the ones I use throughout this post, including SEMrush, SISTRIX, SimilarWeb, SearchMan, Mobile Action, AppTweak, OnPage.org, SEOmonitor and URL Profiler.

Use these tools (or your preferred alternatives) to answer the following mobile web and app search-related questions. Doing so will help you identify more opportunities and potential, as well as establish an effective optimization strategy:

1. Which Are Your Industry’s Top Mobile Web & App Competitors?

Let’s start with the fundamentals by identifying your competitors in mobile web and app search visibility. Are your competitors the same for both mobile web and apps? Are they the same as your desktop search competitors?

To identify your mobile web competitors, you can use SEMrush or SISTRIX. Both have a “Competitors” report that includes a “Mobile” or “Smartphone Data” segment:

Mobile Web Search Competitors

Top: SEMrush/Bottom: SISTRIX

Something similar can be done to identify your most important mobile app competitors with SearchMan and Mobile Action, with their respective “Probable Competitors” and “Competitors Insights” reports:

Mobile App Competitors

Top: SearchMan/Bottom: Mobile Action

2. How Does Your Overall Mobile Traffic Performance Compare With That Of Your Competitors?

A question that arises frequently at the beginning of a mobile optimization process is, “What type of traffic can be expected from the optimized mobile presence?”

Although there’s no completely accurate way to answer (as it depends on a number of factors such as your own efforts, your competitors’ activities, audience trends, industry seasonality, search platforms, algorithm updates and so on), you can use a tool like SimilarWeb to check the level of overall traffic that your top mobile competitors already have. That should give you an idea of what is achievable if you maximize the visibility of all your channels:

Mobile Web Traffic Potential

Screenshot from SimilarWeb

You can do something similar with your mobile app competitors and look at installs, active users and sessions. This data can also provide a reference to take into consideration:

Downloads Estimation App

Top: SimilarWeb/Bottom: Mobile Action

For apps specifically, it’s good to look at traffic from within app stores, as well. SimilarWeb lets you see this information not only for in-store search (i.e., the app store’s internal search functionality), but also for all in-store related traffic (if it’s a featured app, for example). You can also compare yourself against your competitors:

Screenshot from SimilarWeb

3. What Are Your Current Mobile Web & App Search Rankings Versus Your Competitors’?

Once you have identified your competitors and their overall performance, you can dig deeper and analyze rankings. Look not only at your own mobile search rankings and trends, but at those of your competitors, too.

Compare this data easily with the mobile web search filters of SEMrush and SISTRIX:

Mobile Web Rankings

Top: SEMrush/Bottom: SISTRIX

It’s even easier with the “Domain vs. Domain” functionality of SEMrush, to which you can apply the “Mobile” filter (as seen in the following screenshot):

Mobile Web Ranking Competitors Keywords

Screenshot from SEMrush

Although not as straightforward, this can also be done with mobile app store keyword rankings, as well as related search volume and popularity metrics. You can get this information from SimilarWeb, but you can also check out SearchMan and Mobile Action, which give you historical data, as well as competition and difficulty-related scores:

Mobile App Keywords

Top: SearchMan/Bottom: Mobile Action

Besides in-store rankings and keywords, it’s also possible to obtain external search queries from search engines referring traffic to your mobile app (and your competitors) with SimilarWeb’s “Search Engine Keywords” report.

This will give you a much better understanding of the keywords generating visibility for your app presence and how your competitors are already leveraging them:

Mobile App In-Store & External Search Keywords

Screenshots from SimilarWeb

Now is also a good time to identify queries for which your competitors are ranking higher than you in mobile web search results, as well as queries for which they have lost rankings. You should strengthen your presence for the former and, where competitors have lost rankings, take the opportunity to fill the gap.

You can filter the search data you have already gathered, or use the tools’ own filters to segment it directly. For example, SEMrush’s “position changes” report shows the new, lost, improved and declined mobile search keywords per month for any site:

Mobile Web Competitors Ranking Lost

Screenshot from SEMrush

In its “Opportunities” report, SISTRIX offers suggestions of mobile keywords for which you are still not ranking as well as you could, with a high traffic potential and low to medium competition level:

Mobile Web Keywords Opportunities

Screenshot from Sistrix

Similar options are also provided for mobile app keywords using Mobile Action’s “Keyword Detector” functionality, which shows the unique and shared keywords that two apps are ranking for:

Mobile App Competitors Keywords

Screenshot from Mobile Action

AppTweak also has a “Suggested Keywords” function:

Suggested Mobile App Keywords

Screenshot from AppTweak

Use this keyword data to prioritize your mobile web and app content development and optimization efforts.

4. How Does Your Mobile Search Visibility Differ From Your Desktop Search Visibility?

Take the keywords that you have identified in the mobile search analysis above and compare them with the top ones that you have been prioritizing in your SEO efforts until now. Are they different or the same? Use your current organic search traffic engagement and conversion metrics to prioritize keywords that will likely bring you greater benefits.

You can find your current organic search data from mobile and desktop web searches segmented with the help of OnPage.org’s Impact report (which integrates with the Google Search Console).

Mobile vs. Desktop Rankings Difference

Screenshots from OnPage.org

Additionally, SEOmonitor integrates with Google Analytics and Search Console data, providing mobile-segmented reports that include historical mobile rankings per keywords.

These reports make it easy to visualize the change trend, the specific URLs ranking for each keyword in mobile search results, traffic behavior and performance against competitors:

Mobile Web Ranking Trend

Screenshots from SEOmonitor

5. Which Of Your Best-Performing And High-Priority Pages Are Still Not Mobile-Ready Or Mobile-Optimized?

Once you have the keywords that you should target for your mobile optimization efforts, identify which pages you should be optimizing for them, and determine whether or not they are already mobile-friendly.

You can automate this by aggregating the relevant or already ranking URLs for the keywords you want to target and importing them into URL Profiler. From here, you can easily verify if they pass Google’s “Mobile Friendly” validation and their “Mobile PageSpeed” score by integrating with Google’s API.

URL Profiler Mobile Validation

Screenshots from URL Profiler
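
If you’d rather query Google’s data directly instead of (or in addition to) URL Profiler, the PageSpeed Insights API can be called for each URL with the mobile strategy. The sketch below assumes the current v5 endpoint and response shape, a placeholder API key and example URLs; check the API documentation for the exact fields available to you:

```
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
API_KEY = "YOUR_API_KEY"  # placeholder

# Hypothetical list of the ranking URLs for your target keywords.
urls = ["https://www.example.com/", "https://www.example.com/category/shoes/"]

for url in urls:
    response = requests.get(API, params={"url": url, "strategy": "mobile", "key": API_KEY})
    data = response.json()
    # The mobile performance score (0 to 1), if present in the response.
    score = (
        data.get("lighthouseResult", {})
        .get("categories", {})
        .get("performance", {})
        .get("score")
    )
    print(url, "mobile performance score:", score)
```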

Additionally, it’s fundamental to check your mobile-oriented website for any misconfigurations (such as faulty redirects, blocked JavaScript, CSS and images, or a missing viewport tag) and to confirm that it follows mobile web best practices.

Most SEO crawlers now offer mobile web crawling functionalities. You can use OnPage.org Zoom to verify how the mobile Googlebot crawls your site — if the mobile version is correctly served, if all of the mobile pages are effectively found, if there are any speed issues and so on — and use the Focus functionality to verify the mobile Web settings at a page level:

Mobile Web Crawling

Screenshots from OnPage.org

6. Which Of Your Competitors Are App Indexing, And How Significant Is The Impact? For Which Of These Queries Are You App Indexing, Too?

Besides prioritizing those queries for which you have identified there’s a high interest in both mobile app and web search, you should also take into consideration the ones that your competitors are already targeting and benefiting from.

You can check this by first identifying those apps getting high “external search engine” traffic and the keywords sending this traffic, by using SimilarWeb “External Traffic” and “Search Engine Keywords” reports:

app external traffic sources - app indexing

Screenshots from SimilarWeb

Then you should validate how these competitors are ranking for those keywords in mobile search results. You can easily simulate Android mobile search results by using Chrome’s “Device Mode” as specified here, and check whether competitors are app indexing and linking directly to their apps in search results, how they are doing it and the type of visibility they’re getting from it:

App Indexing Emulation SERPs

Using Chrome DevTools’ Device Mode to simulate  mobile search results on different mobile devices.

Once you have validated the mobile search results and app indexing visibility for your competitor’s top external search keywords, you can verify if you’re already app indexing for them yourself by checking your app Google Search Console profile, specifically the “Search Analytics” report.

App Indexing Issues

Screenshots from Google Search Console

If you’re already taking these keywords into consideration, how many clicks and positions are you getting from them?

If numbers are low, here are a few things to check:

  • verify how your content is shown in search results;
  • explore whether you’re ranking with the right page;
  • see if the app referral is correctly shown;
  • determine if there are errors that have been identified for them in the Google Search Console “Crawl Errors” report, and
  • verify the app content accessibility with the “fetch as Google” option.

App Indexing Fetching

Google Search Console “Fetch As Google” report for app content

Make sure you’re effectively app indexing the desired content to target your priority queries, so you can maximize your visibility in search results for them.

Final Thoughts

I hope that these steps, questions and tools help you identify even more mobile web and app search opportunities. Remember to also track and monitor your mobile search rankings and behavior once you have started your strategy implementation.

Besides SEOmonitor and OnPage.org, you can use Wincher to track a high volume of keywords per device and SERPwoo to track the mobile SERPs without focusing on specific competitors. For Mobile App rankings, you can use any of the Mobile App analysis tools mentioned before, along with Google Search Console “search visibility” for app indexing visibility.

Have a successful mobile web and app optimization process!

The post Discovering Opportunities To Drive Your Mobile Web & App Optimization Strategy appeared first on Search Engine Land.

]]>
5 Questions To Select The Right Structure For Your International Websites /5-questions-select-right-structure-international-websites-233301 Thu, 22 Oct 2015 14:15:15 +0000 http:/?p=233301 You want to launch your business in a new market. What Web configuration should you use to maximize SEO benefit? Columnist Aleyda Solis poses five questions to help you determine which is right for you.

The post 5 Questions To Select The Right Structure For Your International Websites appeared first on Search Engine Land.

]]>

The most frequent question I am asked, both online and off, is:

“What’s the best Web structure for my international site?”

This question might seem quite basic and fundamental, but unfortunately, there is no clear-cut “right” answer. Although there are general best practices, the truth is that if you’re in a relatively competitive industry, the selection of the best international Web structure should take more into consideration than what Google suggests.

The structure you choose should be based on your specific needs, your current status and your competition within your targeted markets. It’s important to consider all these factors, as you can end up creating more challenges in your international SEO process or completely risking your international SEO success if your selection is not the best one.

Here are the factors I take into consideration to give the best possible recommendation for every situation, by asking these questions:

1. Are You Country Or Language Targeting?

This should always be the first question to answer in any international SEO process: How will you target (or how are you currently targeting) your international audience? By language? By country?

Many businesses are not completely clear on this, which can cause issues with international SEO implementation. It’s the reason websites end up trying to target France’s audience with a subdirectory in a UK country code top-level domain (ccTLD) or by featuring flags in their language-targeted Web versions (as if they were country targeting).

Although ideally, the best way to target any audience is to do it at the most granular and personalized level (which in this case would be to country target), the truth is that there are some situations where country targeting is not the best option:

  1. When location might not be such a critical factor in your business operation and model. For example, location might be less critical for a technology blog with a business model based on advertising, or for a SaaS company, than for an e-commerce or real estate site. Atlassian and Skype, for instance, have language, not country, targeting for their international Web versions.
  2. When your audience is distributed across many different countries speaking the same language. Ideally, you would localize for each country — but initially, you might not have a critical mass of potential users and search volume to justify separate website versions for each location. It’s important to develop keyword and audience research and use the data to identify the expected traffic and ROI per country (for which I created a calculator) to assess the viability of enabling a country-targeted version. In some situations, it would make sense to start by targeting language to “test the waters” and verify growth trends in some countries; once a country reaches a certain level of traffic and conversions, you can launch a specific Web presence targeting that country.

Although you could be targeting completely by country or by language, the reality is that at the beginning you might have a mixed approach: country targeting your most important markets and language targeting the others. What’s fundamental, though, is that you know very well how you’re targeting your international markets and why you have chosen that type of targeting; this will be the first and most fundamental criterion to take into consideration when selecting your international Web structure:

For any of these options, it’s a good practice when targeting many countries and languages to use hreflang annotations in order to make sure the right version of a page is shown to the desired international audience in search results. This is especially true when you have many versions in the same language targeting different countries.
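
Because every page version must reference every alternate version (including itself), hreflang annotations are usually easier to generate than to hand-write. Here is a minimal Python sketch of the idea; the locale codes and example.com URLs are placeholders:

```
# Hypothetical mapping of hreflang values to the equivalent URL of one page.
versions = {
    "en-gb": "https://www.example.com/uk/",
    "en-us": "https://www.example.com/us/",
    "es-es": "https://www.example.com/es/",
    "x-default": "https://www.example.com/",
}

# Every version lists every alternate (including itself), so the same block of
# annotations can be reused across all of them.
annotations = "\n".join(
    f'<link rel="alternate" hreflang="{code}" href="{url}" />'
    for code, url in versions.items()
)
print(annotations)
```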

Each potential implementation has pros and cons, and you shouldn’t just blindly select an implementation without thinking it through.

For example, although ccTLDs are the “ideal” option to country target, a brand-new domain with no link profile may require much more effort to grow in popularity and thus might not be the best option for a competitive market. In cases like these, it’s necessary to assess more information about the targeted international search markets.

On the other hand, subdirectories for language targeting might also seem the best option. Yet this option may carry more risks if you’re in a sector or market where you’re more liable to be spammed or penalized, as this could end up hurting your whole domain as a consequence.

Country & Language Target International Web Structure

This is why it’s necessary to answer additional questions that will give you insights into the markets you are targeting. Determining how you plan to target (by country, language or some combination thereof) is an essential first step in selecting the best international implementation for your website.

2. How Are You Already Performing In Relevant Markets?

It’s important to understand your starting point in your desired international target markets. Your ideal Web structure could change based on the current rankings, traffic and authority you have with your current site in your targeted markets — especially if you have already established an international Web structure for them.

To identify this, you can answer the following questions:

  1. What’s the current Web structure for your already existing markets? If you’re country targeting at the moment: Are you using a ccTLD for your local market or a .com?
  2. Do you have already any type of international Web presence? If so, what type of Web structure does this international presence have?
  3. What’s the current search visibility, traffic and conversions of your already existing international Web presence (or your current site for the international markets you want to target)? Are you already ranking well for certain keywords? Do you already have an internationally targeted Web structure with non-trivial link popularity?
  4. What’s causing your current Web presence — targeted or not — to perform as it is in your international markets?

For example, if you have a current ccTLD targeted towards your existing local market and nothing else, then the possibility of enabling a subdirectory that leverages your current domain link popularity is eliminated; if you’re targeting another country or a language worldwide, you won’t be able to effectively target and rank with a subdirectory on an irrelevant ccTLD.

On the other hand, if you already have a high-authority gTLD that is ranking for a few queries in your international markets without even being optimized or with subdirectories targeting them, you might be able to leverage it based on the next step’s results.

I’ve also seen a number of situations where there’s already a well-established ccTLD ranking for non-trivial queries in its targeted markets, and the reason for assessing a change is to identify opportunities to simplify the website’s management and operating costs. In this case, it is important to identify how visible, established and profitable the current Web structure is, and whether it will be worth migrating it to another one.

3. What’s The Web Structure Of The Top Sites In Your Targeted Markets?

Use SimilarWeb “Industry Analysis” to list the top overall sites of your desired countries’ and sectors’ markets, as well as the top ones specifically in search, in order to identify the Web structure of the best-performing and most popular sites. Is having a ccTLD correlated with the best-ranking and overall most popular sites? What’s the percentage of the top sites with gTLDs?

Top Competitors in the Industry

Extend this validation by using services like impersonal.me or I Search From to simulate your searches from your targeted international markets (or, ideally, a proxy to search from a local IP directly), and verify the Web structure of the top-ranking sites for the most popular and relevant queries. This can vary a lot between queries, even within the same sector.

For example, you can see the difference between the sites ranking for “car hire Marbella” in the UK, where half of the ranking sites are gTLDs and the other half are UK ccTLDs. But for “car hire South Africa,” only two of the ranking sites have a UK ccTLD, four have a South Africa ccTLD and the other four have a gTLD.

Web Structure UK SERPs

It is then a must to verify, for your most important international queries, whether it would be viable to rank in top positions with a gTLD or whether the already-ranking sites are only or mostly local ccTLDs.

4. What’s The Link Popularity Of The Top Sites In Your Targeted Markets?

Besides identifying and analyzing the Web structure of the top-ranking sites targeted for your main queries in your international markets, be sure to investigate their link profiles, as well.

In addition to the overall popularity at a domain and page level, look at what percentage of links are coming from local sites — with local ccTLDs and IPs — versus gTLDs. How natural are these links? You can use Open Site Explorer or Majestic or simplify the analysis with tools like CognitiveSEO.

Car Hire UK SERPs

It’s important to identify the link popularity gap that you would have in case you start with a completely new ccTLD or (in case gTLDs are also ranking for your top queries) by using subdirectories in your existing gTLD. This will help you assess how feasible it is to rank with one configuration versus another (and the level of difficulty you would have in doing so).

If you have seen that most of the ranking sites are ccTLDs with a lower level of authority, then it might be feasible to start directly with a ccTLD with no or very low link profile; however, if the link popularity gap is too big, you might want to consider using a gTLD if these are ranking in top positions in these markets.

5. What’s The Influence Of Having A Localized Domain vs. A Generic One On User Behavior?

This question isn’t always possible for you to answer, as it requires access to search visibility and traffic data for sites already ranking in your targeted countries and sectors. In the event that you’re not starting from scratch and/or have access to data of sites targeting to the same countries, languages and industries, it’s definitely worthwhile to analyze the influence of cultural factors and preferences of your international audience when it comes to choosing ccTLDs vs. gTLDs. You can even test these by launching pilot projects with only the top site pages.

Keep in mind that a site’s performance can be affected not only by the specific characteristics of your targeted countries but also by the visibility and authority of the brand you’re working with locally; if it’s a branded or non-branded query; if it’s a mobile or desktop result; the number of ads shown; and organization of the SERPs, among other factors.

Thus, it isn’t advisable to simply generalize and expect to draw accurate conclusions from market to market or business to business just with this data. Still, it’s worthwhile to take a look if you have firsthand data, to identify potential patterns.

For this step, you can use the “Search Analytics” report in Google Search Console and filter by specific pages and the queries for which they rank, to identify their click-through rate (CTR) vs. position difference from country to country. See whether a local ccTLD earns a higher CTR than a gTLD for the exact same position, page and query.

Global English Version in .com

For example, you can see here how the CTR changes when using a .com gTLD across the US, UK and Australia. In Australia, despite having a better position than in the US for the exact same page and query, it doesn’t result in a higher CTR.
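
If you export this Search Console data, a short pandas sketch can line up CTR and average position for the same page and query across countries. The CSV file name and column names below are assumptions about how your export is structured:

```
import pandas as pd

# Hypothetical Search Console export with one row per country, query and page,
# and columns: country, query, page, clicks, impressions, position.
df = pd.read_csv("search_analytics_export.csv")

df["ctr"] = df["clicks"] / df["impressions"]

# Compare CTR and average position for the same page and query across countries.
comparison = (
    df.groupby(["page", "query", "country"])[["ctr", "position"]]
    .mean()
    .unstack("country")
)
print(comparison.head())
```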

If the same happens across many pages and queries, and the search result pages in these countries are similar (in terms of layout and number of ads), then it could point to a potential impact on user behavior in that specific country. That can give you the last insight you needed to avoid using a gTLD in that country, especially if you have also seen that most of the top sites there are ccTLDs.

Although it would be difficult to draw definitive conclusions from this data alone, it can provide additional insight for your analysis when making a final decision.

It’s Time To Answer These Questions And Select The Best International Web Structure For Your Site!

By answering the questions above, you’ll be able to make a much more informed decision when selecting the best international Web structure in your specific situation — one that can effectively perform in your desired markets and that’s flexible to grow and scale over time.

Do you use any additional data or research to make this decision? Feel free to share!


Photo by Alexander Kaiser of pooliestudios.com.

The post 5 Questions To Select The Right Structure For Your International Websites appeared first on Search Engine Land.

]]>
7 Key Questions To Unlock Growth Opportunities In Your SEO Audits /7-key-questions-unlock-growth-opportunities-seo-audits-227428 /7-key-questions-unlock-growth-opportunities-seo-audits-227428#respond Thu, 20 Aug 2015 13:31:24 +0000 http:/?p=227428 When auditing a website's search engine optimization (SEO) efforts, it's easy to find issues that need to be addressed -- but columnist Aleyda Solis reminds us that we can use this data to discover new opportunities, as well.

The post 7 Key Questions To Unlock Growth Opportunities In Your SEO Audits appeared first on Search Engine Land.

]]>

SEO audits are generally conducted to evaluate the status of, and identify any issues with, a website’s technical setup, content, linking and overall organic search performance.

However, an audit should do more than this. It should enable you to discover growth opportunities — and it should help you to prioritize your SEO actions based on how critical they are and the level of impact they will have.

Sometimes, huge opportunities are not identified — and thus not recommended to be prioritized in the SEO process — because key questions about the elements assessed and/or data gathered during the audit haven’t been asked.

Following are some of the key questions that I’m currently asking when doing SEO audits, which allow me to prioritize and focus on the changes with the highest growth potential and fix some otherwise overlooked issues:

1. Which Pages That Are Not Meant To Be Indexed Or Ranked Are Wasting Your Site Crawl Budget?

Wasted Crawl Budget - SEO Audit

Let’s start with a fundamental issue that I still see every single time I do an audit: wasted crawl budget. This is usually the result of keeping active links to error pages, redirected pages, canonicalized pages or non-indexable pages.

This issue is especially common on sites that have gone through a migration or redesign. Even if former URLs were redirected, sometimes internal links pointing to the old URLs are never updated, causing many internal redirects; in other cases, the initially implemented redirects are eliminated, causing the crawling of now non-existent pages.

In other situations, wasted crawl budget occurs as a result of the massive use of canonical tags and meta robots noindex tags to treat content duplication issues (especially with some CMSs automatically including those).

The canonical and noindex tags are generally effective in fixing content duplication issues; nonetheless, if these pages are still internally linked, they will sometimes end up eating your crawl budget, as in the example below.

Don't waste your site crawling budget

The goal should be to make the most of the search crawlers’ effort on your site, prioritizing the discovery of the pages that are important and meant to be ranked. You can use SEO crawlers like Screaming Frog, OnPage.org or Deepcrawl to easily answer this question.

2. Which Of The Highest Visibility Pages Are Canonicalized Or Redirected?

Canonicalized or Redirected Top Pages

Another common issue I find is that relatively well-ranked pages are now being 301-redirected or canonicalized toward other pages; the rankings are not always completely lost, but their performance is not as good as it could be if they were the canonical version.

If the now-canonical page is the one that is supposed to rank, then it’s important to improve the signals toward it and eliminate any links going to the old non-canonical ranked pages.

To identify this issue, you can go to Google Search Console and select the “pages” filter from the Search Analytics report. This will provide you with a list of the pages with the highest organic search visibility, which you can export as CSV, then import into Screaming Frog and crawl using the “Upload List” mode.

This will give you the HTTP status codes of these pages and show you whether they are canonicalized.
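
If you want to script this check instead of (or alongside) Screaming Frog, a rough Python sketch like the one below fetches each exported URL, reports its HTTP status and redirect target, and pulls out the canonical tag with a simple (admittedly brittle) regex. The URLs are placeholders:

```
import re
import requests

# Hypothetical list of your highest-visibility URLs exported from Search Analytics.
top_pages = ["https://www.example.com/", "https://www.example.com/old-landing-page/"]

link_tag_re = re.compile(r'<link[^>]*rel=["\']canonical["\'][^>]*>', re.I)
href_re = re.compile(r'href=["\']([^"\']+)["\']', re.I)

for url in top_pages:
    response = requests.get(url, allow_redirects=False)
    canonical = None
    if response.status_code == 200:
        tag = link_tag_re.search(response.text)
        if tag:
            href = href_re.search(tag.group(0))
            canonical = href.group(1) if href else None
    # Flag anything that is redirected, erroring or canonicalized elsewhere.
    if response.status_code != 200 or (canonical and canonical != url):
        print(url, "->", response.status_code,
              "redirect:", response.headers.get("Location"),
              "canonical:", canonical)
```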

3. Which Of Your Best-Ranked Pages & Query Combinations Have A Low CTR Or High Bounce Rate?

URLProfiler Integration with Google Analytics and Search Console

It’s a must to review your top-ranked pages and identify any performance issues that end up hurting their organic search conversions, such as a low click-through rate (CTR) from search results or a high bounce rate.

Low CTR can be common when the pages ranking for some queries are not the ones intended (due to content cannibalization problems) or when there’s a non-optimized or incorrect title or meta description shown, providing a poor user experience in the search results pages.

High bounce rates can occur when there are page speed problems, content issues (not enough relevant information or a mismatch from user query) or a lack of mobile optimization. These issues also result in a bad user experience, causing visitors to leave the page instead of continuing with their customer journey.

In order to detect these problems, it’s necessary to gather and analyze the following data for the top ranking pages:

  • Average position and CTR for related search queries, from Google Search Console
  • Organic search traffic and bounce rates, from Google Analytics
  • Titles and meta descriptions, from Screaming Frog
  • Speed and mobile validation, from Google API by using URL Profiler

You should first export the top visibility pages from the Google Search Console Search Analytics report to crawl in Screaming Frog. From there, use URL Profiler (you can directly import the Screaming Frog file) to integrate the required Google Search Console and Analytics data, as well as Google’s page speed and mobile validation through their API.

CTR & Bounce Rate - URLProfiler

At the end, you’ll have an Excel sheet with all of the necessary data in one place (as shown in the image above), which will facilitate the required analysis to answer this question.

4. Which Of Your Most Visited & Best Converting Pages Have Speed Performance Issues?

High PageViews Pages with Page Speed Issues - GA

I recommend digging deeper with your page speed analysis. In addition to looking at potential issues with the best ranking pages, it’s important to find out if any of your most visited and best converting pages suffer from speed problems, as well.

Google Analytics has Site Speed reports where you can directly check which pages have the highest page views and higher-than-average load time. Among these reports is a “speed suggestions” report that integrates with PageSpeed Insights, giving you the average load time for your top pages, along with a link to the PageSpeed suggestions to get recommendations for improvement.

Yandex Metrica also offers a page load time report that includes a wealth of page-speed-related information, including DOM load time, server response, HTML load and parsing time, time to establish connection and more. These data can help you to identify potential site speed issues.

Mobile & Desktop Speed of Highest Visited Pages

Of course, you can further the analysis by exporting data from Google Analytics about your most visited and best converting pages, crawling them with Screaming Frog, then again using URL Profiler to import them and validate their mobile and desktop speed.

5. Which High-Traffic Queries Are Your Competitors Ranking For In Mobile Search That You Aren’t?

Mobile Keywords Competitors

SEO audits often include some form of competitive analysis; however, this is often limited to desktop search only, despite the fact that Google surfaces different results on mobile.

Due to the increasing importance of mobile search (and especially in light of Google’s recent mobile friendly algorithm update, aka “Mobilegeddon”), it’s time to take this competitive analysis to the next step by making sure you’re maximizing your mobile presence (both Web and app).

You can start by identifying which queries your competitors’ mobile sites and apps are already ranking for and profiting from that you’re still not targeting.

This can be done using SEMrush, which provides mobile search data for the US, or Sistrix, which provides this data — along with the level of competition, potential traffic and trends — for the US, UK, Germany, France, Spain, Italy, the Netherlands and more. Sistrix also shows which pages specifically are ranking in mobile search results, and at which positions.

A similar analysis can be done to identify potentially missed opportunities with mobile app search (especially when implementing mobile app indexing) by using SimilarWeb’s app analysis to check the percentage of traffic brought from search engines to your top competitors’ apps, as well as the keywords bringing it. You can compare this data with yours to identify the existing gap and mobile search growth opportunities.

App Search Traffic Opportunity

6. Which Of Your Organic Landing Pages Are Not Internally Linked Or Only Found In XML Sitemaps?

Sometimes your most important, meant-to-be-ranked pages are not prioritized in your internal linking structure due to website architecture issues (especially common after a redesign or site migration). In the most extreme cases, these pages wind up as “orphan pages” that can’t be found through normal Web crawling.

Universal Crawl: GA & Sitemaps

Some SEO crawler tools, such as DeepCrawl or OnPage.org, allow you to integrate your XML sitemaps and Google Analytics landing pages; they will crawl the pages found there and flag any that appear only in those sources and haven’t been found in the Web crawl. The screenshot below shows OnPage.org’s orphaned pages report.

Orphan Pages

7. Which Verticals Can Provide More Search Visibility Opportunities Through Universal Search Results?

It’s also common in SEO audits to check if you’re ranking with the right pages and formats. You can identify potential opportunities to grow your organic search traffic by creating and optimizing content in different formats for Google’s universal search results, depending on your vertical; for example, localized or geotargeted sectors may wish to pursue maps results, and e-commerce sites may wish to pursue visibility within images results.

By identifying these opportunities, you’ll be able to prioritize the development of multi-format content to maximize its impact.

You can start by checking SimilarWeb’s Industry Analysis, selecting your desired industry and country, then verifying in the “Search” report the percentage of traffic sent by each channel, as well as the keywords sending the most traffic per channel.

Universal Search Industry Opportunity

You can additionally refine and expand these keywords by using Sistrix’s “Universal Search” module to check the universal search visibility profiles of your industry’s top players and competitors.

This view will provide information on search traffic sent by each vertical over time, as well as the keywords, position and pages they’re ranking with.

Universal Search Rankings & Opportunities

I hope that these questions help you to identify even more opportunities for your organic search performance by providing a more strategic approach to your SEO audit.

Which questions have you found are key when doing your SEO audits? Please feel free to share in the comments.

The post 7 Key Questions To Unlock Growth Opportunities In Your SEO Audits appeared first on Search Engine Land.

]]>
/7-key-questions-unlock-growth-opportunities-seo-audits-227428/feed 0
7 Key SEO Activities That Can Now Be Automated /7-key-seo-activities-can-now-automated-223471 /7-key-seo-activities-can-now-automated-223471#respond Thu, 25 Jun 2015 13:51:02 +0000 http:/?p=223471 Columnist Aleyda Solis shares some of her favorite tools for automating various aspects of search engine optimization.

The post 7 Key SEO Activities That Can Now Be Automated appeared first on Search Engine Land.

]]>

Although it’s hard to keep up with the growing number of SEO tools that have been launched in the last few years (along with the new functionalities of the existing tools), it’s necessary to test them in order to identify how their features can support and help advance our SEO activities more efficiently.

This is especially true when it comes to tasks that are critical or highly beneficial to the SEO process but are complex and/or time-consuming to execute.

That’s why I want to share with you seven such SEO tasks that now can be partially or completely automated with the support of some tools.

1. Assessing Your Industry Traffic Potential

One of the first activities when launching a new website or SEO campaign is to assess traffic potential (ideally per channel) and identify the potential competitors in the market. Estimating this can be challenging, especially when starting to work in a new market that you don’t know anything about.

Nonetheless, SimilarWeb “Industry Analysis” reports can greatly help by allowing you to easily obtain the most important traffic data for any industry in many countries; they also show the traffic per source, the most popular sites per channel, and trends.

However, remember to take these numbers as references, not absolutes; and whenever you can, validate with other data sources.

Search Industry Analysis

2. Identifying Keyword Opportunities For Your Sites

Finding new keyword opportunities is important when focusing your SEO process and establishing profitable yet still feasible goals.

In the past, doing this type of analysis was time-consuming, but now it can be completely automated with Sistrix‘s “Opportunities” feature. You can include up to three competitors, and it will show which keywords these competitors are already ranking for that you’re still not targeting, along with the level of traffic opportunity and competition.

Sistrix SEO Opportunities

3. Identifying Related Relevant Terms To Use In Your Content By Doing A TF-IDF Analysis Of The Top Ranked Pages For Any Query

TF-IDF stands for “term frequency” and “inverse document frequency.” According to the OnPageWiki:

With the TF*IDF formula, you can identify in which proportion certain words within a text document or website are weighted compared to all potentially possible documents. Apart from the keyword density, this formula can be used for OnPage optimisation in order to increase a website’s relevance in search engines.

Although it’s known that TF-IDF has been used to index pages, there hasn’t been a popular tool offering it to identify relevant term variations of our topics that we should be using. This information can be used to improve our site’s relevance for other terms our audience uses.

OnPage.org includes a handy TF-IDF tool in its on-page analysis and monitoring platform, which can be used to identify term variations or combinations that our competitors are already using, but we still aren’t (by analyzing both the top 15 page results and our own page that we want to rank). By focusing on terms related to our main keywords, we can increase our site content’s relevance for the desired topic.

tf-idf analysis
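
If you want to experiment with the underlying idea outside of OnPage.org, scikit-learn’s TfidfVectorizer makes it easy to compare term weights between the top-ranking pages and your own. The toy documents below stand in for the real page texts you would scrape, and the feature-name call assumes a recent scikit-learn version:

```
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical corpus: the text of the top-ranking pages for your target query,
# with your own page as the last document.
documents = [
    "black friday deals on shoes boots and trainers",       # competitor page 1
    "best black friday shoe offers discounts and coupons",  # competitor page 2
    "our shoe shop black friday landing page",              # your page
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(documents)
terms = vectorizer.get_feature_names_out()

own_weights = matrix[len(documents) - 1].toarray()[0]
competitor_weights = matrix[: len(documents) - 1].mean(axis=0).A1

# Terms weighted highly on competitor pages but absent from yours are
# candidate variations to work into your content.
gaps = sorted(
    ((weight, term) for term, weight, own in zip(terms, competitor_weights, own_weights) if own == 0),
    reverse=True,
)
for weight, term in gaps[:15]:
    print(f"{weight:.3f}  {term}")
```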

4. Visualizing Your Site’s Internal Linking

I have written in the past about visualizing a site’s pages and links as a graph to facilitate the analysis of a website’s internal linking, which was doable but took a lot of effort. The process required exporting the URLs crawled, then processing them with visualization tools.

This has now been made easy by the “Visualizer” functionality of OnPage.org. It not only lets you automatically generate the internal link graph of any site, but it also provides functionalities to browse and filter links, show their relationships, and display only the nodes (or pages) that follow a certain pattern.

This can be extremely helpful for better understanding how a site is internally linked, the cardinality of the links, and whether there are any “orphan pages” or areas of the site that are not connected with the rest.

internal link graph
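
For a do-it-yourself version of the same idea, the networkx library can turn a crawler’s edge export into a directed graph and surface orphan pages or disconnected areas. The edge list below is a toy placeholder for your real crawl data:

```
import networkx as nx

# Hypothetical edge list exported from your crawler: (source page, linked page).
edges = [
    ("/", "/category/shoes/"),
    ("/", "/category/bags/"),
    ("/category/shoes/", "/product/blue-trainers/"),
    ("/category/bags/", "/product/leather-tote/"),
    ("/old-landing-page/", "/"),  # links out, but nothing links to it
]

graph = nx.DiGraph()
graph.add_edges_from(edges)

# Pages with no incoming internal links behave like orphan pages for crawlers.
orphans = [page for page, in_degree in graph.in_degree() if in_degree == 0]
print("Pages with no incoming internal links:", orphans)

# Weakly connected components reveal areas of the site cut off from the rest.
components = list(nx.weakly_connected_components(graph))
print("Number of disconnected site areas:", len(components))
```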

5. Getting All Key Optimization, Link Popularity, Social & Organic Traffic Data For Your Top Site Pages In A Single Place

Gathering the data when doing an SEO audit can be time-consuming. This data includes a website’s technical optimization, content, link popularity, current organic search traffic, and search engine rankings, which we used to obtain from different, non-connected data sources that were a challenge to integrate later.

This data gathering can now be largely automated thanks to URLProfiler, which directly retrieves much of the required data while combining many other tools’ data. For example, in order to get all the key SEO metrics for the highest visibility pages of your site, you can download the “top pages” CSV from the “Search Console” Search Analytics report, import them to Screaming Frog SEO crawler in the “list mode,” and crawl them.

Once crawled, you can import them directly to URLProfiler with the “Import from Screaming Frog SEO Spider” option. Then, you should select the additional metrics you want to obtain for these pages: Mozscape link popularity and social shares metrics, Google Analytics organic search traffic data (you’ll be able to select the segment you want), and Google PageSpeed and Mobile validation (these will require that you get and add a free API key from Moz and Google).

URL Profiler

Now, you can run URLProfiler and get the results in a few minutes in one spreadsheet: All the data from Screaming Frog, Google Analytics, MozScape link and social shares, Google PageSpeed and mobile validation for your top pages with the highest visibility in Google’s Search Console. It will look like this (and I can’t imagine the time I would have needed to put this all together manually):

URL-profiler

There’s no excuse to not develop a quick SEO audit for your most important pages, taking all the key metrics into consideration.

6. Getting Relevant Link Prospects With The Desired Requirements And Direct Contact Information

Obtaining a list of sites that are highly relevant to your business might not be that difficult; doing so when looking only for highly authoritative sites from a specific country, with visible contact information (among other criteria), is a bit more complex.

All this can be easily done now with the LinkRisk Peek tool, which provides many advanced filters to only get the sites that will be relevant and feasible to use for outreach.

LinkRisk Peek

7. Tracking Daily Rankings Of Full SERPs For Your Relevant Keywords

There was a time when we tracked the rankings for our most important keywords, for both our own sites and our top competitors’. Due to ongoing ranking fluctuations, sometimes new competitors appear that we were not tracking, and it is then hard to identify how our gains and losses correlate with theirs.

Additionally, once we got the ranking information, we had to analyze the pages to identify the potential reasons for the ranking shifts. We did this using tools to obtain the domain/page link popularity, among other factors.

This is now easier to do with tools like SERPWoo. Rather than tracking specified URLs (yours and your competitors’), SERPWoo tracks the top 20 results for your keywords by default. It also includes useful metrics such as page and domain link popularity, social shares, etc., to help marketers more easily analyze the potential causes of a rankings fluctuation.

SERP Tracking

I hope that these functionalities help you as much as they have helped me! Which other SEO activities are you now automating that used to take you a lot of time? Please, feel free to share in the comments!


Image used under Creative Commons from Flickr

The post 7 Key SEO Activities That Can Now Be Automated appeared first on Search Engine Land.

]]>
/7-key-seo-activities-can-now-automated-223471/feed 0