Frederick Vallaeys – Search Engine Land
News On Search Engines, Search Engine Optimization (SEO) & Search Engine Marketing (SEM)

A script that detects anomalies at a local level
Tue, 05 May 2020
This script alerts advertisers when regional differences related to the coronavirus have an impact on your PPC campaigns.

The post A script that detects anomalies at a local level appeared first on Search Engine Land.

Scripts are awesome for many reasons but the business conditions brought on by coronavirus have highlighted one of their most powerful benefits: they can be created cheaply and quickly. So much so that they can be deployed in quick response to unexpected events, even when there’s no expectation they will be useful for a long time. And when it comes to COVID scripts, we all hope those will be irrelevant sooner rather than later. But until then, they can at least help us get through stressful times with one less thing to worry about.

So far I’ve shared a script that annotates PPC performance with data about government responses to the virus. Martin Rottgerding shared his script that plots pre- and post-COVID performance of accounts for an entire MCC. Today, I’d like to share another script, one that alerts advertisers when there is an anomaly in performance in a specific location: the Geo Anomaly Detector.

Alerts are critical, especially now

Setting up helpful alerts should always be part of a PPC manager’s job. There’s simply too much data for humans to monitor manually. Normally alerts deal with fairly mundane PPC stuff like making sure monthly budgets aren’t exceeded, accounts don’t stop serving when a credit card expires, etc.

One alert many of us haven’t had to consider before, mostly because it wasn’t usually relevant, was a geo anomaly detector. Most advertisers already pay close attention to geographic targeting, geo bid adjustments, and performance at a regional level. But this is usually done under the assumption that each geographic region will exhibit reasonably consistent behavior over time. For example, if a car dealer tends to get more clicks in San Francisco than in other cities, that would normally be the case all the time.

But now with coronavirus, there are two unexpected new factors in play that impact geographic performance:

  1. How local governments restrict what businesses can do
  2. How consumer behavior changes in response to the virus

Whereas before one could safely assume that things wouldn’t change much in the various targeted geographies, now that’s changed and key performance indicators of search campaigns can have wild swings from one day to the next.

Finding out when KPIs behave in unexpected ways in a particular location was the premise for the Geo Anomaly Detector script I am sharing here today.

Geo anomaly detection example

I spoke about this idea conceptually during a recent PPC Town Hall. I had recently visited a Best Buy in California for a drive-up order and when I mentioned it to my dad who lives in Massachusetts, he told me his store was open like normal. 

That made me realize national advertisers with physical locations now had to account for an overwhelming number of different possible scenarios in all the locations where they are present. 

Whereas in California I had to order online and queue in the parking lot for my pickup, my dad was still allowed to walk into his local store in another state. But being older, he was not comfortable doing so. These two new variables could have a big impact on PPC performance and for advertisers it might be hard to stay on top of what conditions exist in all the locations they operate. 

But it wasn’t until Jordan Pierson from Supreme Golf emailed me with an idea for how to treat this scenario with a script, that the Geo Anomaly Detector came into existence. Jordan noticed that golf tee time bookings were impacted by differing local regulations as well as people’s behavior in response to COVID-19. Monitoring all their locations for sudden changes in performance was taking a lot of manual time.

And right now professionals simply don’t have that extra time, given some have to teach their kids at home or are picking up a bigger workload due to the virus.

What the script does

The Geo Anomaly Detector script uses several weeks of historical data to calculate the average performance for each combination of day of week and location where ads serve. It then compares this average (the ‘normal’) to yesterday’s performance. When the difference exceeds a user-defined threshold, it sends an email alert.
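The core comparison can be sketched in plain JavaScript. This is a minimal illustration of the logic described above, not code from the actual script; the function and field names are my own assumptions:

```javascript
// Sketch of the day-of-week/location baseline comparison described above.
// Names here are illustrative, not taken from the published script.

// Average a metric across several weeks of history for one
// (location, day-of-week) combination.
function baseline(history) {
  const total = history.reduce((sum, day) => sum + day.clicks, 0);
  return total / history.length;
}

// Compare yesterday to the baseline and flag an anomaly when the
// change exceeds a user-defined percentage threshold.
function isAnomaly(history, yesterdayClicks, thresholdPct) {
  const normal = baseline(history);
  const changePct = ((yesterdayClicks - normal) / normal) * 100;
  return Math.abs(changePct) > thresholdPct;
}
```

For example, if the last four Tuesdays in a location averaged 100 clicks, a Tuesday with 40 clicks is a 60% drop and would trigger an alert at a 25% threshold.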

The script sends an email with details for any locations where yesterday’s performance is unusual compared to the typical performance for the same day of the week in that location.

The script also generates a Google Sheet with complete geo data so advertisers can browse it, even when there are no alerts.

The script generates a Google Sheet with columns of the typical performance and yesterday’s performance for each location where the ads are being shown.

Script settings

There are some basic settings first:

  • LAST_N_WEEKS tells the script how many weeks of historical performance to consider
  • EMAILS is a comma separated list of email addresses to notify when the script is finished running
  • MIN_CLICKS_PER_DAY excludes locations with too few daily clicks from the analysis. This helps keep the number of alerts under control for big accounts

Next, there is a more complicated setting to indicate your thresholds for when to get alerts:

This is an object with an array of settings. The easiest way to use it is to just change the numbers and metric names rather than write the object from scratch.

  • INC_PCT refers to the percentage change. Any change bigger than this percentage is eligible to trigger an alert
  • INC_ABS is the absolute change in the metric. Only if both the percentage and absolute change are exceeded will the script trigger an anomaly warning.
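Combined, the two thresholds behave roughly like this. The setting names come from the article, but the object shape and example numbers are my assumptions, not the script’s actual code:

```javascript
// Illustrative sketch of how INC_PCT and INC_ABS combine.
// Only the setting names come from the article; the shape is assumed.
const THRESHOLDS = [
  { metric: "clicks", INC_PCT: 25, INC_ABS: 50 },
  { metric: "conversions", INC_PCT: 50, INC_ABS: 5 },
];

// Both the percentage change AND the absolute change must exceed
// their thresholds before an anomaly warning fires.
function shouldAlert(metric, normal, yesterday) {
  const rule = THRESHOLDS.find((t) => t.metric === metric);
  if (!rule) return false;
  const absChange = Math.abs(yesterday - normal);
  const pctChange = normal === 0 ? Infinity : (absChange / normal) * 100;
  return pctChange > rule.INC_PCT && absChange > rule.INC_ABS;
}
```

Requiring both conditions keeps small accounts from drowning in alerts: a jump from 2 clicks to 4 is a 100% change but only 2 clicks in absolute terms, so it stays quiet.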

I recommend you run this script once per day, around 3 a.m. It’s not necessary to run it every hour as it only uses data through yesterday.

Install this script code in an individual Google Ads account:


I hope this script helps you more quickly find out when regional differences related to the coronavirus have an impact on your PPC campaigns. By finding out about these anomalies sooner, you’ll be able to take action more quickly, preventing wasted spend and capturing more conversions.


Track how PPC performance changed when specific COVID-19 events took place
Fri, 10 Apr 2020
This Google Ads script overlays government actions related to the pandemic on Google Ads performance metrics.

The post Track how PPC performance changed when specific COVID-19 events took place appeared first on Search Engine Land.

We’re in uncharted territory with COVID-19 and its impact on lives, the economy and our PPC campaigns. I don’t have a medical degree, and I’m not a scientist, a first responder, or someone whose job takes them to the front lines of the situation, so I am grateful for everything they are doing.

With my skills as a PPC expert, the best I can offer is to help people get clarity so they can make the best possible decisions in uncertain situations.

At Google, one of the core tenets of innovation was to share all information. The idea is simple: if you give smart people all the information relevant to a situation, they will generally converge on the same solution to the problem.

I’d like to do my part in helping advertisers get all the information in the form of a Google Ads script that overlays government actions related to the pandemic on Google Ads performance metrics.

The idea is to help you see if certain events, like store closures, the start of shelter in place, the closing of schools, or the introduction of social distancing, correlate in any way with drop-offs or spikes in performance.

The script generates a spreadsheet of Google Ads performance data with markers for important COVID-19 events, like school closings, shelter-in-place orders, etc. 

The time frame for which we have data on government action may be too short to reveal many meaningful patterns, but the script still has value: your boss or your client may ask how COVID-19 is impacting their ads, and this script can save you a couple of hours of manually compiling that data.

We’ve also heard feedback from our users at Optmyzr that this visualization has helped them because they serve customers in many locations and it’s challenging to stay on top of what is happening in all the different places where their customers may be advertising.

For this script, we are using a dataset from Kaggle with global government actions. It may not be complete or always up-to-date but it seemed like the best source for the data.

Installing the script is very simple. Just add it as a new script in your Google Ads account, add your email address to get notified when a new report is done, and schedule it to run once per day. 

Install the script in Google Ads by overwriting all the sample code of a new script. Then preview the script to get the URL of the report.

The script grabs geo-segmented account data for the past 30 days and adds it to a Google Sheet. 

The spreadsheet is filled with raw geo performance data from Google Ads and any associated government actions on that date in that region.
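Conceptually, the overlay is a join of the two datasets on (region, date). Here is a minimal sketch of that join; the field names are my assumptions, not the actual columns of the Kaggle dataset or the Google Ads report:

```javascript
// Sketch of joining daily geo performance rows with government-action
// events on (region, date). Field names are illustrative assumptions.
function overlayEvents(perfRows, eventRows) {
  // Index events by "region|date" for O(1) lookup per performance row.
  const byKey = new Map();
  for (const e of eventRows) {
    const key = e.region + "|" + e.date;
    if (!byKey.has(key)) byKey.set(key, []);
    byKey.get(key).push(e.action);
  }
  // Attach any matching actions to each performance row.
  return perfRows.map((row) => ({
    ...row,
    actions: byKey.get(row.region + "|" + row.date) || [],
  }));
}
```

Rows with no matching event simply get an empty list, so the spreadsheet still contains the full performance history.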

Then you can filter that data for the geo-location you want to see, and the included chart will update automatically to show just the metrics for that region, overlaid with just the government actions in that same region.

To get a better chart, filter the data for the regions you care most about.

For the US, the data is most relevant at the state level. Internationally, the data is best at the country level.

Here’s the script:


Will RSAs help or hurt your account? This script will help you figure it out
Thu, 05 Mar 2020
The best way to determine the effect of an automation like RSAs is to test them with a Campaign Experiment, a feature available in both Google and Microsoft ad platforms.

The post Will RSAs help or hurt your account? This script will help you figure it out appeared first on Search Engine Land.

Have you heard conflicting stories about the usefulness of PPC automation tools? You’re not alone! On one side you have Google telling you that automations like responsive search ads (RSAs) and Smart Bidding will help you get better results and should be turned on without delay. On the other side you get expert practitioners saying RSAs are bad for conversion rates and Smart Bidding delivers mixed results and should be approached with caution. So how do you decide when PPC automation is right for you?

I’d love to give you the definitive answer but the reality is that it depends because Google and practitioners are both right! Neither would win any long-term fans by lying about results so the argument from both sides is based on the performance from different accounts with different levels of optimization.

In this post, I’ll tackle one way to measure if RSAs help or hurt your account. I won’t say if RSAs are good or bad because the answer depends on your implementation and my goal is to give you a better way to come to your own conclusion about how to get the most out of this automated PPC capability.

To optimize automated features, we need to understand how to better analyze their performance so that we can fix whatever may be causing them to underperform in certain scenarios. In our effort to make the most out of RSAs, we’re going to have to play the role of PPC doctor, one of the three roles humans will increasingly play in an automated PPC world.

To make this analysis as easy as possible, I’ll share an automation layering technique you can use in your own account right away. The script at the end of this post will help you automatically monitor RSA performance down to the query level and give ideas for how to optimize your account.

The right way to test RSAs is with Campaign Experiments

The best way to come to your own conclusions about the effect of an automation like RSAs is to test them with a Campaign Experiment, a feature available in both Google and Microsoft ad platforms.

With an experiment, you can run two variations of a campaign: the control, with only expanded text ads, and the experiment, with responsive search ads added in as well.

When the experiment concludes, you’ll see whether adding RSAs had a positive impact or not. When measuring the results, remember to focus on key business metrics like overall conversions and profitability. Metrics like CTR are much less useful, and Google is sometimes at fault for touting an automation’s benefits in terms of a metric that matters only in a PPC context, not in a corporate boardroom.

As an aside, if you need a quicker way to monitor experiments, take a look at a recent script I shared that puts all your experiments from your entire MCC on a single Google spreadsheet where you can quickly see metrics, and know when one of your experiments has produced a statistically valid answer.

There is, however, a problem with this type of RSA experiment: it only tells you the campaign-level result. If the campaign with RSAs produced more conversions than the campaign without, you will continue with RSAs but may miss the fact that in some ad groups, RSAs were actually detrimental.

Or if the experiment with RSAs loses, you may decide they are bad and stop using them, when they possibly drove some big gains in a limited set of cases. We could look deeper into the data and discover some nuggets of gold that would help us optimize our accounts further, even if the answer isn’t to deploy RSAs everywhere.

It’s the query, stupid

As much time as we spend measuring and reporting results at aggregate levels, when it comes time to optimize an account, we have to go granular. After all, when you find a campaign that underperforms, fixing it requires going deeper into the settings, the messaging (ads) or the targeting (keywords, audiences, placements).

But ever since the introduction of close variants made exact match keywords no longer exact, you should go one level deeper and analyze queries. For example, when you see a campaign’s performance gets worse with RSAs, is that because RSAs are worse than ETAs? Or could it be that the addition of RSAs changed the query mix and that is the real reason for the change in performance?

Here’s the thing that makes PPC so challenging (but also kind of fun). When you change anything, you’re probably changing the auctions (and queries) in which your ads participate. A change in the auctions in which your ad participates is also referred to as a query mix change. When you analyze performance at an aggregate level, you may be forgetting about the query mix, and you may not be doing an apples-to-apples comparison.

The query mix changes in three ways: 

  1. Old queries that are still triggering your ads now
  2. New queries that previously didn’t trigger your ads
  3. Old queries that stopped triggering your ads

Only the first bucket is close to an apples-to-apples comparison. With the second bucket, you’ve introduced oranges to the analysis. And the third bucket represents apples (good, bad, or both) you threw away.
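The three buckets amount to a simple set comparison between the pre-change and post-change query lists, which can be sketched like this (an illustration, not the script’s actual code):

```javascript
// Classify queries into the three query-mix buckets described above:
// unchanged (apples), new (oranges), and lost (discarded apples).
function queryMixChange(before, after) {
  const beforeSet = new Set(before);
  const afterSet = new Set(after);
  return {
    unchanged: after.filter((q) => beforeSet.has(q)),
    added: after.filter((q) => !beforeSet.has(q)),
    lost: before.filter((q) => !afterSet.has(q)),
  };
}
```

Only the `unchanged` bucket supports a fair before/after comparison; the other two need to be evaluated on their own merits.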

Query mix analysis explains why results changed

The analysis at a query level is helpful because it explains the ‘why’, not just the ‘what’. Why did performance change, not just what changed? Once you understand the ‘why’, you can take corrective action, such as adding negative keywords when new queries deliver poor performance.

For an RSA query analysis, what you want to see is a query level report with performance metrics for RSAs and ETAs. Then you can see if a query is new for the account. New queries may perform differently than old queries, so they should be analyzed independently. The idea is that an expensive new query that didn’t trigger ads before may still be worth keeping because it brings new conversions we otherwise would have missed.

With the analysis that the script below does, you will also see which queries are suffering from a poorly written RSA and losing conversions as a result. Many ad groups have too little data for Google to show the RSA strength indicator, so having a different way to analyze performance with this script can prove helpful.

Without an automation, this analysis is difficult and time consuming and probably won’t be done on a routine basis. Google’s own interface is simply not built for it. The script automates combining a query report with an ad report and calculates how much impact RSAs had. I wrote about this methodology before. But now I’m sharing a script so you can add this methodology to your automation toolkit.

ETA vs RSA Query Analysis Script

The script will output a Google Sheet like this:

Caption: The Google Ads script produces a new spreadsheet with detailed data about how each query performs with the different ad formats in each ad group.

Each search term for every ad group is on a separate row. For each row, we sum the performance of all ETAs and RSAs for that query in that ad group. We then show the ‘incrementality’ of RSAs in red (worse) or green (better).

When the report is finished, you’ll get an email with a link to the Google Sheet and a summary of how RSAs are helping or hurting your account.

The recommendation is one of four things:

  1. If the ad group has no RSAs, it recommends testing RSAs
  2. If the ad group has no ETAs, it recommends testing ETAs
  3. If the RSA is converting worse, it suggests moving the query into a SKAG with the existing ETA and testing some new RSA variations
  4. If the RSA is converting better, it suggests moving the query into a SKAG with the existing RSA and testing some new ETA variations

You don’t have to follow this exact suggestion. It’s more of a way to get an idea of the four possible situations a query could be in.
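The decision tree above is simple enough to express directly. This is a sketch of the four cases as described; the flag and rate names are my own, and the published script’s exact logic may differ:

```javascript
// Sketch of the four-way recommendation described above. Assumes
// simple per-row flags and conversion rates (illustrative names).
function recommend(row) {
  if (!row.hasRSA) return "Test adding an RSA";
  if (!row.hasETA) return "Test adding an ETA";
  if (row.rsaConvRate < row.etaConvRate) {
    return "Move query to a SKAG with the existing ETA; test new RSA variations";
  }
  return "Move query to a SKAG with the existing RSA; test new ETA variations";
}
```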

My hope is that this proves to be an interesting new report that helps you understand ad type performance at a deeper level and gives you a jumping off point for a new type of optimization.

To try the script (code at end of article), simply copy and paste the full code into a single Google Ads account (it won’t work in an MCC account) and review the four simple settings for date range, email addresses, and campaign inclusions and exclusions.


This script’s purpose is to populate a spreadsheet with all the data. It doesn’t filter for items with enough data to make smart decisions. How you filter things is entirely up to you. For example, I would not base a decision about removing an RSA on a query with just 5 impressions. You can add your own filters to the overall data set to help you narrow things down to the highest priority fixes for your own account.

I could have added these filtering capabilities in the script code but I felt that most advertisers are more comfortable tweaking settings in spreadsheets than in JavaScript. So you have all the data, how you filter it is up to you. :)

Methodology for calculating incrementality

The script itself is pretty straightforward but you may be curious about how we calculate the ‘incrementality’ of RSAs. Here’s what we do if a query gets impressions with both ad formats.

  1. We assume that additional impressions with a particular ad format will deliver performance at the same level as existing impressions with that ad format.
  2. We calculate the difference in conversions per impression, CTR and CPC between RSAs and ETAs for every row on the sheet.
  3. We apply the difference in each of these three metrics to the impressions for the query.

Caption: Each row of the spreadsheet lists how much RSAs helped or hurt the ads shown for each query in the ad group. Based on this, advertisers can make decisions to restructure accounts to get the right query with the right ad in the right place.

That allows us to see how clicks, conversions and cost would have changed had we not had the other ad format.
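For the conversions metric, the per-row calculation can be sketched as follows. The variable names are mine, not the script’s, and this shows only one of the three metrics the methodology covers:

```javascript
// Estimate how many conversions RSAs added (or lost) for one query row,
// per the methodology above: assume ETA-level conversions per impression
// would have held for the RSA impressions, and take the difference.
function rsaIncrementalConversions(row) {
  const rsaConvPerImp = row.rsaConversions / row.rsaImpressions;
  const etaConvPerImp = row.etaConversions / row.etaImpressions;
  // Conversions gained (positive) or lost (negative) on the
  // impressions that went to RSAs instead of ETAs.
  return (rsaConvPerImp - etaConvPerImp) * row.rsaImpressions;
}
```

For example, a query whose RSA converts at 1% per impression while its ETA converts at 1.5% has lost about 5 conversions per 1,000 RSA impressions.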


Advertisers should not make decisions with incomplete data. Automation is here to stay, so we need to figure out how to make the most of it, and that means we need tools to answer important questions like ‘how do we make RSAs work for our account?’ Scripts are a great option for automating complex reports you want to use frequently, so I hope this new one helps. Tweet me with ideas for new scripts or ideas for how to make this one better.


Automation layering is driving PPC so get onboard in 2020
Fri, 03 Jan 2020
This is the year to let the search engines perform the complicated tasks so PPC practitioners move back to the pilot’s seat to drive better results.

The post Automation layering is driving PPC so get onboard in 2020 appeared first on Search Engine Land.


Contributor and SMX speaker, Frederick Vallaeys, wants PPC practitioners in the pilot’s seat in 2020 and knows automation layering is the tool that will help drive better results.

Below is the video transcript:

Hey, I’m Fred Vallaeys, I’m the CEO and cofounder at Optmyzr.

So what’s going to happen in 2020? Well, I think we’re going to continue to see a lot of the automation taking over some of the things humans have been doing in PPC.

But I’ve been thinking about this a lot, and I think the way we as PPC practitioners can really earn our keep is to start with automation layering.

So let Google, let Microsoft figure out all the complicated, advanced artificial intelligence, machine learning and how to show our ads to a better audience and how to set the right bids. But we have to control it. We have to monitor it, right?

If you think of yourself as the PPC pilot, you’re the one who’s in charge of that plane, steering it and making sure the right data is going in and the right decisions are being made. It’s fundamentally a question of automation layering.

So how do you build a little script? A little automation that keeps track of what Google is doing. And if it sees that it’s kind of going far afield, and maybe there’s a close variant match that you don’t think is a great one, how do you bring that back in? Right.

You can write a simple automation that alerts you to this so that you can go make a manual change. Or you could do something more sophisticated, right? A Google Ads script that automatically figures out when this is happening and then puts in negative keywords.

So this whole concept of automation layering is basically saying: listen, in a world of more automated PPC, how do we want to control these accounts, and what automations can be put in place, even if they’re not super sophisticated, to just put us back in charge of PPC and drive even better results? So that’s what I think about 2020.

This is part of a special feature from our community of experts on what successful marketers will do in 2020. Read more >>


3 pitfalls of PPC experiments
Wed, 11 Dec 2019
Knowing what to test and how to interpret the results based on nuances and oddities of experiments is an important skill for people, not automations.

The post 3 pitfalls of PPC experiments appeared first on Search Engine Land.

While there are many tried-and-true best practices in search marketing, the devil is in the details when it comes to achieving the best results. For example, it’s hard to argue with the merits of automated bidding, but it’s not that hard to get bad results if you deploy it incorrectly.

Say you read that Hertz used smart bidding to reduce their CPA by 35% so you decide to deploy the same strategy in your account. If it were that simple to run a successful Google Ads account, we’d all be out of jobs. Simply knowing what feature to use isn’t enough as you also need to know the right settings that will make it work as well for you as it did for the advertiser in the case study.

And to be the best search marketers we can be, we can’t simply look at what other advertisers did. Instead, we can take hints from others and use them as the basis for homing in on what works for us. We have to discover the details of the right way ourselves.

And that’s why being really good at PPC experimentation is so important. I spoke on this topic at SMX East in the session “Awesome Tests, Profitable Results,” and here are some of the key takeaways.

The three most popular PPC testing methodologies

One of the key claims to fame of search marketing is that it’s more measurable. So whenever we try something new, we’d better have some numbers to back up our findings, which means running experiments in a structured manner.

There are three ways we usually see this done.

Before-and-after tests

The simplest way to start a test is to make a change in a live campaign and then compare the results from before and after the change was implemented. The beauty of this method is that you can test anything in your ads account quickly. The downside is that while setup is super quick, measurement takes more effort, and you can’t get an apples-to-apples comparison because results may be impacted by external factors that change between the before and after periods.

How much data to compare

When measuring results, allow enough time to minimize time-based factors. And while I’d love to tell you exactly how much time that is, remember the point that things differ for every advertiser, industry, etc.

But if you want some guidance, at the very least measure one full week before and after to remove the impact of weekday versus weekend performance.

If you are in a vertical where not just the day of the week but also the time of the month plays a role, e.g., in automotive, measure a full month before and after the change. In automotive, the time of the month may impact how aggressive dealers are on price as they try to hit monthly targets and consumers’ willingness to buy fluctuates along with the dates when they get their paychecks.

Lookback windows for bid management changes

Specific to bid management, if you’re using the before-and-after technique to measure impact, remember that your lookback window should be the same as the frequency of your changes. For example, if you make bid changes every day, you can’t look at the last 30 days of performance data because that might include data from 30 different bid levels, one for each day of the lookback period.

Ads experiments

So clearly, a before-and-after testing methodology comes with some serious challenges and that’s why both Microsoft and Google have added features to run better tests in PPC accounts. While it takes a bit more time to set up the experimental campaign with all the changes to be tested, it has the benefit of removing any potential skew in results that’s common in before-and-after tests. It does this by letting the advertiser do a split test, for example, a 50-50 split where half the users are served the control and the other half the experiment.

And not only are the results more reliable, whatever time is invested to set up the experiment is easily recouped because reporting of results is baked into the product.

Market split tests

Larger advertisers, or those trying to measure the impact of offline changes on online performance, may consider doing a split test where the split isn’t based on individual users (or cookies) but rather the split is between entire geographic areas. For example, an advertiser could run TV ads in San Francisco and not in Seattle and use Seattle as the control and SF as the experiment. This is much more complicated so I won’t go into it here.

Why a control may be impacted by the experiment

As much as we may try, there is no way to run an experiment and know with absolute certainty that the result is due to what we did and not due to another factor beyond our immediate control. Even when doing a split test where we assume to have a solid control, its results may be impacted by second-order effects. 

The experience of one of our clients illustrates this really well. They ran an experiment to test if Target CPA (tCPA) bidding would improve the performance compared to bidding manually with the help of scripts and rules. To their surprise, the average CPC in the control quickly went way up. What happened?

Smart bidding started testing more aggressive bids and astute competitors with timely alerts noticed this and increased their own bids. These higher bids started to impact the control where average CPCs went up and impression share was lost when the bids couldn’t keep up with the competition. So the lousy results in the control were attributable to the changes the experiment set in motion. Talk about making it hard to really understand what’s going on!

The point is that PPC is a dynamic space and any change that’s made can have far-reaching effects. Looking with too narrow a lens can make us miss the real reason why something changed.

Why the experiment may not be the real reason for the change

In another example, we found an advertiser testing tCPA smart bidding but not seeing the results they were hoping for. After investigating, it became clear that the problem wasn’t with bidding, but with conversion tracking.

When the advertiser was bidding manually, they were using their human common sense to keep bids high for relevant upper-funnel keywords even though they weren’t seeing many direct conversions from these.

But when they switched to automated bidding, it started reducing bids for these ‘non-converting’ keywords because it had no way to understand their importance at the earlier stages of the consumer journey.

The point is that you can’t always test things in isolation. Before you test smart bidding, you should have your measurement set up correctly.

Why the experiment results obscure opportunities

Once you discover a winning experiment, you may think you’re done, but there’s always more to optimize. Remember that Google and Microsoft just report on whether the control or the experiment won. But if the experiment wins, it doesn’t mean that everything about it is better than the control. It means that in aggregate, the combination of things in the experiment is better than the combination of things in the control.

Going deeper to identify these combinations can take a lot of time and that’s why winning PPC specialists should consider having a few tricks up their sleeve to go beyond what the average advertiser will pay attention to.

Most advertisers will just launch the winner and move on to another experiment. PPC rockstars, on the other hand, will use techniques to look deeper to understand why the winner won and to find further opportunities to improve performance.

Responsive Search Ads provide a good example. In aggregate it’s easy to see if RSAs are good or bad for an ad group by looking at the typical KPIs like conversions, CPA, and ROAS (or better yet, profits). But what the ad group-level aggregate hides is what’s happening at the query level and in search, the query is everything.

I’ve written scripts for, and articles about, how to do a deeper analysis of RSAs, and it can lead to some really interesting findings.

By combining a query report with an ad report, it’s possible to see when a query triggers only an ETA, only an RSA, or both. From there you can measure how many conversions are lost by showing multiple ads, and split the query out so it is served only by the best-performing ad type.
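As a rough sketch of that join (field names like `query` and `adType` are assumptions for illustration, not actual report column names), classifying each search term by the ad types it triggered might look like:

```javascript
// Sketch: join query-level rows from an ad report and bucket each search term
// by whether it triggered only an ETA, only an RSA, or both ad types.
function classifyQueries(rows) {
  var byQuery = {};
  rows.forEach(function(r) {
    byQuery[r.query] = byQuery[r.query] || {};
    byQuery[r.query][r.adType] = true;
  });
  var result = {};
  Object.keys(byQuery).forEach(function(q) {
    var t = byQuery[q];
    result[q] = t.ETA && t.RSA ? 'both' : t.RSA ? 'RSA only' : 'ETA only';
  });
  return result;
}
```

The queries bucketed as 'both' are the ones worth splitting out, since they are the ones where impressions are divided across ad types.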

When we did this analysis for a client, we found one instance where they kept the wrong ad for a brand query. By fixing this single mistake, the branded query started delivering significantly more conversions for the same number of impressions.


I’ve written at length on the future of humans in a world where PPC is becoming more automated. Being really good at experimentation is one of the aspects where we can differentiate ourselves. And while tools make testing faster and more measurable, knowing what to test and how to interpret the results based on the many possible nuances and oddities of experiments is a very human skill and one we should all understand as much as possible.

The post 3 pitfalls of PPC experiments appeared first on Search Engine Land.

PPC automation will disrupt your business, but automation layering will save the day /ppc-automation-will-disrupt-your-business-but-automation-layering-will-save-the-day-323702 Fri, 18 Oct 2019 14:56:00 +0000 /?p=323702 Using automations from Google is inevitable, so if you want more control, you can regain it by layering in your own strategies.

The post PPC automation will disrupt your business, but automation layering will save the day appeared first on Search Engine Land.

As a finalist for the Google Premier Partner awards, I recently had the chance to visit Google’s campus in NYC. There, I spoke to a Googler about the state of agencies and he lamented the fact that a large wave of PPC agencies that got their start during the last economic downturn in 2008 are still managing accounts like they did 10 years ago… hardly a strategy for success when we might be on the verge of another economic slowdown.

Considering how much Google Ads has changed in the past decade, it stands to reason that successful account management should also have evolved dramatically. 

My friend and industry pioneer Andrew Goodman recently put this idea of agencies not keeping up with the times in slightly different terms and wrote that self-proclaimed PPC experts don’t keep up with the nuances of all that’s changed in Google Ads because they settle for good-enough rather than going for greatness.

So what follows are my thoughts on how to whip your PPC management skills into great shape for 2020 and beyond. The core idea is that using automations from Google is inevitable, so if you want more control, you can regain it by layering your own strategies on top of Google’s through a concept I call automation layering.

It’s not a question of if, but when, automation will do it better

Back in 2007, Google launched Conversion Optimizer which helped advertisers with at least 300 conversions over a 30 day period reach a CPA target for their ads. Today that strategy is called Target CPA and requires 20 times fewer conversions (15) to achieve similar results.

There are now also about 11 types of bid management strategies to choose from. Some, like Target CPA and Maximize Conversions, almost seem like different flavors of the same thing, showing that things have progressed far enough that the differences in strategy can come down to nuanced differences in goals. It takes serious know-how to make sense of all those strategies and how they interact with manual controls such as bid adjustments.

The point is that there has been a lot of progress in automation for bids. And that’s not even to speak of automations in creatives, targeting, etc.

The creep of automation into all areas of PPC will continue unabated due to two driving forces:

  1. An ever-expanding trove of data. Every time a search happens, Google learns a bit more about the correlation between users and what ads resonate with them. The more data one has, the better the signal gets.
  2. Moore’s Law and the seemingly endless cycle of computing power doubling every 18 months. As computers can do more calculations with the vast data from point 1, they can make better predictions. And while even Gordon Moore believes his law will hit its limit by 2025, there are already teams working on ensuring sustained growth in computing power.

So please, figure out how automation fits into your business plan. Too often I hear it’s something people are too busy to figure out right now. “We might get to it another day,” they say… Well duh! If you used some automation you might actually have time to get strategic about your own business, and maybe even for the clients who pay your bills.

The time to prepare is now

Even if you don’t live in California like I do, you’ve probably read that PG&E, the utility company, turned off power to 800,000 customers because its aging, poorly maintained transmission lines risked sparking the type of catastrophic wildfire that destroyed the town of Paradise in 2018. What does this have to do with PPC? Well, it showed me how bad people are at long-term thinking and planning.

Now that the power’s gone out, all of a sudden everyone’s upset at PG&E because households are in the dark. And while the utility company is certainly not blameless, they’ve been advising customers for months that this likely would happen. But until the warnings turned into reality, few consumers took time to prepare.

Google executives, industry bloggers and industry peers have all been saying that automation is coming to disrupt us. And yet too few agencies heed the warnings and will wait until it’s too late to change.

Complement rather than compete with automation

So if it’s inevitable that automation is here to stay, a modern account manager better figure out how to make it part of their routine. But if for some reason, you still believe that you can compete against automation and win, listen to this advice from Hal Varian, Google’s Chief Economist.

He said:

“If you are looking for a career where your services will be in high demand, you should find something where you provide a scarce, complementary service to something that is getting ubiquitous and cheap. So what’s getting ubiquitous and cheap? Data. And what is complementary to data? Analysis.”

You know what’s getting ubiquitous in PPC? Automation. Many of these automations are not just cheap, they’re free and available to all takers in Google’s own Ads management interface. Sounds exactly like the type of thing Varian was talking about. Heeding his advice, to me it seems smarter to become a great complement to automation than to fight it by continuing to do the same old things manually. 

Automated tools don’t guarantee success – you do

The reality of PPC automations is that results vary from one advertiser to the next so not everyone will see case-study-worthy results when deploying the latest Google smart feature. But on average, automated tools deliver better results with less effort, so if you find yourself as an outlier who’s not successful, you have to first determine why. Then you can plan a strategy to get on the right side of the averages.

For now a lot of the automation we see solves very narrow problems. There is one system for automating bids, another for optimizing ads, and yet another for matching those ads to likely prospects. The role of the agency is to piece the right solutions together and ensure they work well together. For a more in-depth look at this, check out my post about how the wrong combination of automations can destroy an account.

The example of pairing an attribution model like last-click with automations it doesn’t combine well with should make it clear that the results of automation can be extremely dependent on the knowledge and skill of the account manager. If you don’t see the results others are achieving, consider that the fault may not be with the tool but with the person using the tool.

As someone who builds PPC tools for a living, I know that even if the user is to blame for bad results, it’s still the tool creator’s problem. A tool that is amazing at automating ad testing is useless if it’s too difficult for the average advertiser to use without making mistakes. But the best advertisers don’t let shortcomings of technology get in their way and will work to produce amazing results with what’s out there.

Automation doesn’t mean set-it-and-forget-it

There’s this interesting cycle that tools from Google often seem to go through. They start out manual, then become automated, and eventually, some new controls are added. In essence, this turns the automation into a new type of manual tool. 

Bid management is a good example. First, we managed CPCs and bid adjustments manually. Then it became automated as a smart bidding strategy like target ROAS. But then Google added the ability to set ad group level targets, mobile bid adjustments, and seasonality bid adjustments. Seen that way, the “automated” target ROAS bid strategy is pretty manual once you consider all the settings you can now control.

The mere fact that there are several settings and controls for automated tools should be a dead giveaway that the automations can be optimized if advertisers are motivated enough.

I recently asked advertisers the last time they had an account with a single CPC bid. Or the last time they had manual CPC bids that they never changed. The answer for most was: “never!” Target CPA and target ROAS goals should likewise not be the same for the whole account, nor should they be static. They should be managed for better results, and certainly not treated as a set-it-and-forget-it tool.

Humans can make sense of the data

Curious why I say target CPA and target ROAS should be monitored and managed rather than left alone? We should do this because there are factors in everyone’s business that affect conversion rates that Google’s prediction systems may not be picking up on. The automation only detects a change in metrics and in its narrow scope of what it can do, may very well use this data to do entirely the wrong thing for the business.

Here’s an example that I’ve personally encountered… Automated bidding one day noticed that conversion rates dropped significantly and so bids were reduced in an effort to maintain the target CPA. As a result of the much lower bids, the advertiser’s conversion volume went off a cliff and never automatically recovered.

Here’s what happened. The bidding automation correctly saw a decline in conversion rate and adjusted bids downwards. But as a dumb automation, it never asked why the conversion rate dropped. The humans knew it was because a new landing page launched. Humans would have known the correct response to this event was to revert back to the previous landing page rather than decrease bids.
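A simple automation-layering guard, sketched below with illustrative thresholds (the 30% cutoff is an assumption you would tune, not a Google-recommended value), could have flagged that drop for a human instead of silently letting bids fall:

```javascript
// Sketch of an automation-layering guard: instead of letting smart bidding
// quietly cut bids after a conversion-rate collapse, surface it for human review.
function conversionRateAlert(baselineRate, recentRate, maxDropPct) {
  var dropPct = (baselineRate - recentRate) / baselineRate * 100;
  return {
    drop: Math.round(dropPct),          // percentage drop vs. the baseline period
    alert: dropPct >= maxDropPct        // true → a human should investigate why
  };
}
```

In the landing page example above, a guard like this fires before the bid automation has had time to crater volume, giving the humans a chance to ask "why?" first.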

Machine learning is bad, very bad at explaining why and how. Show it a picture of a cat, and it knows it’s a cat. But good luck getting it to explain why it’s a cat.

In my book on the future of digital marketing, I explain that one of the roles humans need to play is that of PPC pilot, someone who monitors the automations and can make course corrections if bad data is causing bad actions to be taken.

Understand what the machine is doing and make sense of the data it is using to make its decisions. That is pretty much what Hal Varian said in the quote I mentioned before.

If automation isn’t working or you don’t trust it, consider automation layering 

If you believe that automation will become even more pervasive in PPC in the future, then it makes sense that we must learn how the automations given to us by the engines work so that we can optimize them by managing their settings. 

It’s the simple premise that humans + machines are better than machines alone. But I think in PPC there’s an additional meaning to that premise. Perhaps the equation should be: 

“Human (account manager at my company) + machine (automation created by Google) is better than machine alone.” 

An advertiser’s worry is not just that the machines are taking over our jobs, but it’s that the machines are built by Google, who also collects many of our advertising dollars. As an ex-Googler, I trust that Google tries to do the right thing, but there’s nothing wrong with wanting some guarantees and putting oversight in place, especially when even Googlers can’t really explain exactly how their machine learning automations are arriving at their decisions.

So that’s why PPC managers want to be in the equation rather than letting machines do PPC on their own. Humans can monitor the machine’s decisions and provide corrections and guidance when those decisions appear sub-optimal.

Script your PPC process for better results from automation

But adding manual human labor to the mix is counterproductive to efficiency and economic growth. Automation would be better. So what account managers should really strive for is to have their own automations to monitor and optimize the automations from the engines. That’s automation layering.

So instead of striving for:

Human + machine 

We should strive for:

Advertiser controlled automation + engine controlled automation

But as enthusiastic as marketers are about building their own automation (thanks to everyone who’s been downloading my scripts over the years), the truth is it doesn’t always come naturally. That’s when third-party tools can be helpful.

They help advertisers control what automations do to their accounts. I’ve explained how to use automation layering to monitor and control close variants that can muddle the meaning of exact match keywords. The following graph tries to explain it more conceptually but check out my previous post if you want a more tactical guide.

Close variants allow Google to target ads for exact match keywords (the small circle in the middle) to a much larger set of search terms (the big outer circle). Google is in control of how big they make this outer circle. If they want more revenue, they can literally change some settings in the code to make the circle bigger to make ad auctions more competitive. Advertisers want more certainty and control. So with automation layering where they control the automation, they can scale back the search terms (the dotted circle) to a level they feel comfortable with.
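Conceptually, the advertiser-controlled layer can be as simple as the following sketch, which flags any search term not on an approved list as a negative keyword candidate. The list check here stands in for whatever matching rule you actually trust; the function name and inputs are illustrative, not from any specific tool:

```javascript
// Sketch: automation layering for close variants. Any search term that isn't an
// exact keyword or an explicitly approved variant becomes a negative candidate,
// shrinking Google's outer circle back toward the dotted one.
function negativeCandidates(searchTerms, exactKeywords, approvedVariants) {
  var allowed = {};
  exactKeywords.concat(approvedVariants).forEach(function(t) {
    allowed[t.toLowerCase()] = true;
  });
  return searchTerms.filter(function(t) {
    return !allowed[t.toLowerCase()];
  });
}
```

Run on a schedule, a rule like this keeps the set of queries that actually trigger ads at a size the advertiser, not the engine, decides on.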

The bottom line

Automation from the engines is already disrupting PPC agencies and will continue to do so more. Anyone who’s been coasting and doing PPC like it’s 2008 needs to come to terms with the fact that this is not a future-proof strategy. Figure out how to bring automation into the mix of what you do. And if you haven’t because you don’t trust it, know that techniques like automation layering can restore some level of control.


A script to create a Google Ads Experiments dashboard /a-script-to-create-a-google-ads-experiments-dashboard-321818 Thu, 12 Sep 2019 17:07:28 +0000 /?p=321818 This script generates a dashboard with a comprehensive view of experiments across My Client Center.

The post A script to create a Google Ads Experiments dashboard appeared first on Search Engine Land.

Despite the constant addition of new automations to Google Ads, there’s still a lot for humans to do. For example, deciding the best combination of settings and automations that is likely to drive results for a particular campaign. What options work best may differ from vertical to vertical, and even account to account. That is why experimentation and testing is such a critical part of a PPC rockstar’s repertoire: from testing different bid strategies, to testing new ad formats like RSAs, to testing new ways to structure an account.

Sadly, testing at scale can be a bit of a pain due to the lack of an experiment dashboard in Google Ads. That’s why I’m sharing a free Google Ads script that generates an experiment dashboard and makes it easier for advertisers to stay on top of the experiments they’re doing.

Why we need an Experiment Dashboard

I wrote this script for a few reasons:

  1. There is no place in Google Ads to see all experiments across My Client Center. You can only see experiments for one account at a time as illustrated in the screenshot below.
  2. The experiment page shows only the most basic details. It doesn’t indicate winners and losers, or even show a comparison of metrics between the control and experiment. The only way to get that data is by clicking on each experiment to see its results and that seems like a waste of time.
Screenshot from Google Ads shows the limited data that is shown on an individual account’s Experiment page.

PPC rockstars are great at testing

I made the point in my book that testing is a key skill that sets a great PPC expert apart from a mediocre one. There are a few elements to being an expert tester and a PPC rockstar:

  1. Be smart about what you test. Know from experience what is a likely experiment to drive results and test that. Don’t waste money on ridiculous tests that are unlikely to drive significant positive outcomes.
  2. Know how to test rapidly. When experiments take too long to reach a conclusion, they bottleneck the account’s ability to improve. Two fairly common scenarios come to mind: (1) the base campaign doesn’t have enough volume, so splitting the traffic yields too little data within a reasonable test period to identify a winner, or (2) there is plenty of data but the control and experiment remain statistically tied, and the experiment is allowed to keep running past the point when a winner could have been identified if there were one.
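Scenario (2) can be caught with a standard two-proportion z-test on conversion rates. This sketch is illustrative, not part of any Google tooling; the 1.96 threshold corresponds to roughly 95% confidence:

```javascript
// Sketch: two-proportion z-test on conversion rates, used to decide whether an
// experiment has a statistically significant winner or remains tied.
function conversionRateZ(convA, clicksA, convB, clicksB) {
  var pA = convA / clicksA;
  var pB = convB / clicksB;
  var pooled = (convA + convB) / (clicksA + clicksB);
  var se = Math.sqrt(pooled * (1 - pooled) * (1 / clicksA + 1 / clicksB));
  return (pA - pB) / se;
}

// |z| >= 1.96 → roughly 95% confident the difference is real, so call the test.
function hasWinner(convA, clicksA, convB, clicksB) {
  return Math.abs(conversionRateZ(convA, clicksA, convB, clicksB)) >= 1.96;
}
```

If an experiment has accumulated plenty of clicks and `hasWinner` still returns false, that's a signal to end the tie and move on to the next test rather than let it keep bottlenecking the account.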

Slow testing applied to Quality Score

When I worked at Google, one of the worst updates to the relevance system that I can remember was when we decided to give advertisers the ability to prove Google’s assessment of their keyword quality was wrong. I was on the team at the time so I had my share of the blame.

Before the change, keywords deemed irrelevant were simply disabled automatically and advertisers had no recourse. So to restore some control, it was decided that every advertiser would get a small allowance to test low relevance keywords. The idea was that rather than quickly disabling keywords, we’d let advertisers experiment with a few keywords the quality system was not fond of so that there’d be clear evidence whether the human or the machine was right.

The problem was that the system automatically picked which keywords to test from each advertiser’s account. Rather than prioritizing high volume keywords that could quickly be tested to conclusion, the system prioritized what it believed to be more relevant terms. But those tended to have lower volume and hence required a longer time to accrue enough data to come to a conclusive result. High volume keywords that could have been tested quickly simply got stuck in a queue behind keywords that took weeks to test and that made for unhappy advertisers.

In 2005, the concept of ‘in trial’ keywords was scrapped and modern Quality Score was introduced.

What the Experiment Dashboard script does

So testing is important, and doing it well is even more important. But when you have to stay on top of lots of tests, possibly across many accounts – like if you’re an agency – it can be a huge time waster because there is no one place to go and see all results. That is why I created this script.

The script fetches all the experiments across an MCC account or for an individual account (be sure to grab the appropriate version at the end of this post depending on whether you are placing it in an MCC or an individual account).

The script figures out the dates for each experiment – keeping in mind that each experiment may have run for a different time frame – and gets the key metrics like conversions, costs, etc. It also gets the traffic split between the control and experiment so that it can calculate the difference in observed performance as if the traffic had been split evenly.

Finally, it adds the data to a Google Sheet. One line per experiment, including the performance metrics. With this spreadsheet-based dashboard, advertisers can monitor performance across a multitude of experiments from a single view.
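As a rough illustration of that normalization step (this is a sketch, not the actual script code), assume the experiment received a fraction `split` of the traffic; scaling both arms to a common basis makes the comparison fair:

```javascript
// Sketch: scale control and experiment metrics to a common basis so an uneven
// traffic split doesn't distort the comparison. `split` is the experiment's
// fraction of traffic, e.g. 0.1 for a 90/10 split.
function normalizedDelta(controlValue, experimentValue, split) {
  var scaledControl = controlValue / (1 - split);
  var scaledExperiment = experimentValue / split;
  return (scaledExperiment - scaledControl) / scaledControl; // relative lift
}
```

For example, 10 conversions on a 10% experiment arm is equivalent to the 90 conversions on the 90% control arm, so the normalized lift is zero even though the raw numbers look wildly different.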

How to create an experiment in Google Ads

Now that we have a better way to monitor experiments, let’s briefly cover the mechanics of how to create experiments in Google Ads.

Advertisers start by creating a draft campaign based on an existing campaign. Once the draft has been set up with everything the advertiser wants to test, it can be promoted to an experiment. When creating the experiment, the draft is allocated a percentage of traffic (the traffic split) and a start and end date. It then becomes an experiment and the base campaign becomes the control.

Future changes to the base campaign – the control – will not be applied to the experiment. Hence if you want meaningful results about the experiment, you need to be careful about what you change in the control or make changes to both the control and experiment at the same time.

Keep in mind you can and should make changes in the control. For example, when testing manual versus automated bidding, you should manage bids as you intend to in the control. But you shouldn’t add new keywords because that changes a factor unrelated to what you’re trying to measure as part of the experiment.

Once the experiment has ended, the advertiser can decide whether to discard or promote it and replace the control. If promoted, whatever is in the experiment gets moved to the original campaign and that then runs again on 100% of traffic until a new experiment is started.

Google provides the results of an experiment only by clicking into the experiment details.

Screenshot from Google Ads showing the difference in performance between the experiment and control.


Improving PPC accounts requires careful testing of the many new options that are constantly announced by Google. Accounts get better when winning experiments are merged into the account. And PPC rockstars get better by building knowledge of what works well, and when, and applying these findings to start higher quality experiments on future accounts.

I will be at SMX East on Nov. 13 to lead a session called “Awesome Tests, Profitable Results,” where I will share more details about this script as well as the methodology and results of my own experiments. I hope you’ll give the script a try and join me in New York for SMX.  

The script

Grab this code if you want a single account dashboard and will copy the code into an end-account:

Grab this code if you want an MCC experiment dashboard and will copy the code into the scripts section of an MCC account:


Towards the beginning of the script are lines that start with:

var spreadsheetUrl = "";
var sheetName = "Sheet1";

Be sure to add a link to your own Google Spreadsheet and add the name of the sheet (tab) where you want the dashboard to populate.


Are SKAGs dead after Google’s latest match type changes? /are-skags-dead-after-googles-latest-match-type-changes-320681 Mon, 19 Aug 2019 12:00:54 +0000 /?p=320681 PPC pros can now leverage new ad formats like RSAs to further boost auction-time relevance.

The post Are SKAGs dead after Google’s latest match type changes? appeared first on Search Engine Land.

Google recently announced that same-meaning close variants will soon apply to phrase match and modified broad match keywords. These match types join exact match, which was the first to start showing ads for close variants with the same meaning in September 2018.

I’ve shared what I believe are the three most important things to do ahead of this round of changes: automate bid management, enable a non-last-click attribution model, and set up an automated monitoring system that checks close variants. You can even grab my script to automate the monitoring process.

But will Google’s change to how keyword match types work have any impact on other ways PPC accounts are managed? Or even more so, does this recent change obviate single keyword ad groups, commonly known as SKAGs? I’ll explain my reasoning in this post, but if you’re strapped for time, the answer I believe is that SKAGs will continue to play a useful role in boosting Quality Score, thereby reducing CPCs, and improving performance of PPC accounts.

What are SKAGs?

SKAG stands for ‘single keyword ad group’. As the name suggests, it’s an ad group that has only one keyword in it, and often that one keyword will be an exact match keyword. For the purpose of this post, I am defining a SKAG exactly that way: an ad group with exactly one exact match keyword. This structure became popular as a way for advertisers to get more control over their accounts.

How SKAGs provide more control

Prior to the introduction of ‘close variants’ in 2014, using an exact match keyword meant the user had to type in exactly the same thing as the advertiser’s keyword in order for the ad to be eligible to appear. Thus by having only a single exact match keyword in an ad group, it meant the advertiser knew exactly what the user searched and so they could write a super targeted and relevant ad for that user’s search. 

SKAGs boost relevance and Quality Score

Having this control over ad relevance helped advertisers boost their Quality Score (QS). By focusing on a single keyword at a time, advertisers could write more compelling ads, thereby boosting CTR and hence achieving a higher QS.

And as I’ve written before, QS is a big part of how Google determines Ad Rank, and a higher QS can make clicks cheaper.

“Ad Rank is a calculation of max CPC, quality score (expected CTR, ad relevance, landing page experience), the expected impact of ad extensions and ad formats as well as other contextual factors like location and device. It determines if your ad is eligible to show and where it appears on the page relative to other ads.”


What makes close variants the enemy of SKAGs

The meaning of match types has been radically redefined by close variants where exact match keywords no longer mean that the user’s query must be an exact letter-for-letter match to the advertiser’s keyword for an ad to show.

Advertisers can no longer write an ad that is guaranteed to show for exactly one query. So it seems that the point of doing SKAGs has been made impossible by Google. But while control has been reduced, SKAGs still help indicate advertiser preferences and hence can still boost QS and reduce PPC costs.

What remains the same: SKAGs help control which ad is shown

Here’s the thing: an exact match keyword is supposed to be given preference in the ad serving priority over another keyword that is a same-meaning close variant match of the user’s search. This has not changed. An advertiser who wants to show a specific ad for a specific query can still put that query as an exact match keyword in a SKAG and know that it is as likely as before to trigger the intended ad and deliver the associated Quality Score benefits.

Long-term PPC pros may argue with that point because as we all learn after doing PPC for a long time, the ad prioritization algorithm is tricky and can’t be entirely relied on. This is why my company Optmyzr has long had optimization tools to add negative keywords automatically when it is found that Google is serving a different ad than the one intended by the advertiser.

But the complexity and reliability of this process aside, the introduction of close variants hasn’t changed how Google picks the keyword that enters the auction when there are multiple possibilities. To understand this better, read Ginny Marvin’s recent explanation of keyword prioritization.

What has changed: SKAGs need to be managed like non-SKAGs

What has changed is that SKAGs now need to be managed a bit more like non-SKAGs because the exact match keyword can all of a sudden show ads for a wider variety of queries than before.

Advertisers can control which ad will show for a particular query by adding that query as an exact match keyword in a SKAG, but they cannot control for which additional close variants that same ad may now also get triggered without adding a lot of negative keywords.

The query management process that used to only apply to broad, modified broad and phrase match keywords is now equally important for exact match keywords in single keyword ad groups.

Query management in a close variant world

The reason Google has so many match types to choose from is that they know it’s near impossible for advertisers to guess all the unique ways users will formulate queries, even when looking for the same thing. Providing only exact match keywords would lead to many missed advertising opportunities.

Google has said since 2003 that 15% of queries are unique and they reaffirmed this stat with data from May 2019. That stat doesn’t actually mean what you might think though; the 2019 stat says that on any given day 15% of queries happen only once. In the past, that stat applied to longer windows of time, sometimes 90 days. So there are actually fewer unique searches now than in the past and that may have something to do with autocomplete but that’s a whole different topic.

In the end, it’s still an important stat because even if you were able to add every possible keyword to your account, some keywords would be disabled for low search volume. And some advertisers would run into keyword limits for their accounts. So whether we like it or not, everyone has to rely on Google’s algorithms at some level to show ads for all relevant searches.

But that doesn’t mean advertisers should stop managing keywords and queries and just depend on Google. PPC pros should look at the search terms reports to identify queries to add as negative or positive keywords.

I covered this in my book, Digital Marketing in an AI World. Even when so much is automated, some PPC tasks remain important enough for human oversight – I describe this as the PPC pro playing a role similar to that of airline pilots.

Do I still manage keywords when Google seems to automate it all?

Let me explain why query management is still critical by providing the possible counter-argument. One could say negative keywords are not important when using smart bidding because the Google system automatically deprioritizes queries that don’t lead to conversions by setting lower bids, which would cause bad search terms to stop triggering ads. Likewise, it could be argued the Google system for close variants is so good at identifying relevant queries that it’s no longer important to spend time developing good keyword lists. ‘Why bother if Google won’t respect exact match keywords anyway?’ one could say.

But over-reliance on these automations may be risky. Consider that Google changes its algorithms and thresholds periodically. This means that what is considered “same-meaning” today may not be tomorrow.

Another risk is conversion tracking may break, or bad data may enter the system due to technical issues like a landing page outage. Automations depend on good data to make good decisions and even the best PPC experts can’t guarantee with 100% confidence that their data will always be good. There are too many external factors and other players involved.

Only by explicitly telling Google which queries you want your ad to show for, and which other ones are unlikely to lead to a high-quality lead, can you provide a level of safety against the aforementioned issues.

As part of day-to-day query management (which I believe is still an important task despite all of Google’s automations), once you identify a great new keyword in a search terms report, it may make sense to add it as an exact match keyword so you no longer have to hope that Google’s machine learning keeps guessing it is relevant. Better still, add it in a SKAG with an ad written just for that term: its QS can be boosted, and it may achieve a higher Ad Rank than when it triggered ads as a close variant match to another keyword.
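A sketch of that triage step, with illustrative thresholds you would tune per account (`minConversions` and `maxCpa` are assumptions for the example, not recommended values):

```javascript
// Sketch: pick search terms worth promoting into their own SKAG, keeping only
// queries with enough conversions at an acceptable cost per conversion.
function skagCandidates(rows, minConversions, maxCpa) {
  return rows.filter(function(r) {
    return r.conversions >= minConversions &&
           r.cost / r.conversions <= maxCpa;
  }).map(function(r) {
    return r.query; // each of these gets its own exact match keyword and ad
  });
}
```

Each query this surfaces would get its own ad group with one exact match keyword and a purpose-written ad, so its traffic no longer depends on close variant matching.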

SKAGs need RSAs too

The purpose of a SKAG has always been to drive better QS with more relevant ads, and that hasn’t changed. The tools to do so, however, have. This has nothing to do with close variants, but everything to do with automated ads such as responsive search ads (RSAs).

Advertisers should add RSAs across all ad groups to drive more volume and better relevance. You can read more about how RSAs drive incremental volume and how to monitor their performance against expanded text ads.

Historically SKAGs have had several purpose-written expanded text ads to drive the best possible results for a single keyword. SKAGs should continue to contain these types of text ads. The text should still be written for the keyword without concern for the other queries it could now trigger due to same-meaning close variants. Continued query management as described above will ensure that good new queries get moved into their own SKAGs where they can also have a purpose-written, QS-boosting ad.

So won’t adding a responsive search ad to a SKAG dilute those QS benefits? It may seem so because RSAs are another example of advertisers ceding control to the machines. But the goal of RSAs is to show the most relevant ad so its goals are aligned with those of advertisers using SKAGs to improve QS.

When writing the components of the RSA (up to 15 headline variations and 4 descriptions), use ad text components that specifically reference the keyword itself as well as its benefits to users who did that particular search.

The advertiser’s task remains to write compelling text, it’s just that the text no longer has to be locked into a set order and the machines can recombine it in a way to maximize results.


SKAGs, or single keyword ad groups with one exact match keyword, continue to help advertisers craft a unique message for specific search terms. This can boost ad relevance, one of the components of Quality Score which can, in turn, reduce advertising costs.

Because exact match keywords may now trigger ads for a whole slew of same-meaning close variants, SKAGs now require search term management, something that wasn’t the case prior to 2014.

PPC pros who have used SKAGs before can continue to use them for the same purpose but can now leverage new ad formats like RSAs to further boost auction-time relevance.

So that’s the bottom line: if you believed in SKAGs before, there’s no reason to stop believing in them now.

The post Are SKAGs dead after Google’s latest match type changes? appeared first on Search Engine Land.

Heard from Google, the biggest trends impacting PPC are privacy and automation /heard-from-google-the-biggest-trends-impacting-ppc-are-privacy-and-automation-319631 Thu, 18 Jul 2019 17:15:03 +0000 /?p=319631 The bottom line on privacy is that user trust and business data involving customers must be shared with care as third-party tools gain importance with different search engines.

The post Heard from Google, the biggest trends impacting PPC are privacy and automation appeared first on Search Engine Land.

We’re now well past this year’s Google Marketing Live event, which was hosted in June in San Francisco. By now you’ve most likely heard about all the latest betas and product changes – the “what” of what was announced. I myself am most excited about the additional controls coming to automated bidding. But what most media did not cover as much is the thinking behind the new launches – in other words, how Google executives talked about the industry and why the new features address the biggest challenges advertisers face.

I believe taking a moment to reflect on some subtle cues of how the announcements were delivered can help us glean insights into Google’s inner workings. What is the ads team thinking? What might that mean for what they will build next? And what does it mean for us PPC pros as the landscape of Google Ads capabilities is constantly shifting?

I truly believe in the value of reading between the lines because even in my days as a Googler, I simply couldn’t know every detail of what Google Ads was doing, yet during Q&A sessions at conferences like SMX, I was expected to have answers. Oftentimes I had the answer because I knew how the product and engineering leaders thought, and what motivated them. All I had to do was connect the dots.

So let’s take a look at some of the statements made by Google executives at Google Marketing Live 2019 that I found interesting and what I think they may mean for the industry.

Greatest businesses in the world are founded on user trust

Prabhakar Raghavan, SVP of Google Ads, focused on the need for privacy, a growing concern among regulators. We’re all still catching our breath from the major changes we had to make to our websites in the past year with GDPR in Europe but this may just have been the beginning. In fact, US-based advertisers who didn’t worry about GDPR will almost certainly have to think about the impact of the California Consumer Privacy Act (CCPA) which goes into effect January 1, 2020.

Raghavan said that consumers have redefined their expectations and now expect to be able to seamlessly move across the web and across devices while having a personalized experience and at the same time have their privacy protected.

Prabhakar Raghavan, SVP of Google Ads, speaking about privacy at Google Marketing Live.

So Google is working on ways to continue delivering relevant ads while using the least amount of user data, says Raghavan. These are difficult problems to solve, and at last year’s GML event we got a glimpse into the type of technology Google is building to address them. For example, double-blind encryption technology lets multiple parties contribute data into a system that joins it together, but where neither contributing party can get any personally identifiable data back out.

Raghavan says that the greatest businesses in the world are founded on user trust and Google obviously wants to be one of the world’s greatest companies.

One of the things you may have heard me repeat more than once is that we can make automated bidding based on machine learning (like tCPA and tROAS) better if we give it better signals about our business. It was summed up really well in a post recapping my session at SMX Advanced where I said something to the effect of: “We must focus on giving the machines the right goals in order to train them correctly.” But business data about conversions is usually about customers so sharing it with a third party like Google requires a lot of care to remove personally identifiable data.

The bottom line on privacy

As privacy concerns mount, and search engines take them more seriously, advertisers will find it more challenging to bring their data about what drives their business into the engines. We already saw customer match being scaled back due to privacy concerns after unscrupulous advertisers submitted lists of users whose permission they lacked. Without this data, the machine learning can’t learn from meaningful signals, which means results from strategies that rely entirely on the engines will be subpar compared to those from advertisers who have found a way to combine internal ML with that of the engines.

I expect we’ll see more ways to bring our data into the engines through Azure from Microsoft or Ads Data Hub from Google. Unfortunately, it seems unlikely that we will be able to use technology from one engine to inform decisions on another engine (e.g. use Facebook Ads audience data to better target those users when they search on Google). To achieve that, third-party tools will gain importance.

The cloud is dead

To say that the cloud is dead seems like a crazy statement, right? I would have said so myself… after all, everything is moving to the cloud. What is not to like about having a supercomputer at your disposal to do things our own devices simply can’t? Privacy is the answer.

As powerful and useful as Amazon Alexa is, many people simply don’t want to be listened to all the time. And now that some Echo devices have cameras, the creepiness factor of being watched constantly only goes up. But it’s thanks to the power of the cloud that Alexa can make sense of even my three-year-old’s questions.

The bottom line on the future of the cloud

Part of the answer according to Google is federated learning, a way of doing machine learning where the user’s private training data never has to go into the cloud. There’s still going to be a cloud, but new ways have to be invented to give our own devices the capability to do things locally so that all private data can be kept secure locally. We may also see terminals like echo devices and nest devices become more powerful again. Whereas we had a trend towards doing more processing in the cloud, now we may start to see a reversal caused by privacy concerns.

Creating a great ad is hard

This was said by Nicky Rettke, director of product management for YouTube Ads. Creating a great ad is one of the most common challenges Google hears about from advertisers. And while she’s talking about YouTube, the same holds for search ads as well. We have an audit tool at Optmyzr (my company), and one of the structural checks it can run on accounts is to look for excessive reuse of the same headlines or descriptions across many ad groups. I’ve seen accounts spending well in excess of $1 million per month on Google Ads where thousands of ad groups all use the same headline.
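A check like this is simple to sketch. The snippet below (plain JavaScript; the data shape is invented for the example and is not Optmyzr’s actual implementation) counts how many ad groups reuse the same headline:

```javascript
// Sketch of the structural check described above: report every
// headline that appears in more than one ad group.
// The ad group shape here is illustrative only.
function duplicateHeadlineReport(adGroups) {
  const counts = new Map();
  for (const group of adGroups) {
    // Count each headline at most once per ad group.
    for (const headline of new Set(group.headlines)) {
      counts.set(headline, (counts.get(headline) || 0) + 1);
    }
  }
  return [...counts.entries()]
    .filter(([, n]) => n > 1)
    .map(([headline, n]) => ({ headline, adGroups: n }));
}

const groups = [
  { name: "shoes", headlines: ["Big Sale Today", "Free Shipping"] },
  { name: "boots", headlines: ["Big Sale Today", "Winter Boots"] },
  { name: "socks", headlines: ["Big Sale Today"] },
];
console.log(duplicateHeadlineReport(groups));
// → [ { headline: "Big Sale Today", adGroups: 3 } ]
```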

Mike Rhodes, a PPC agency founder and smart friend of mine, said that perhaps it’s because if advertisers ran many different variations across their account, they’d find it harder to update all those ads when a new directive came in from the company’s branding team, or when new promotions were launched.

Regardless of the reason, Nicky’s on to something when she says that creating ads, let alone “great” ads, is not usually top of mind for advertisers. Yet when I asked PPC pros during a recent #ppcchat on Twitter what they were least likely to trust to automation, they said it was creating ads. So it’s a task the humans often skip, yet they’re not willing to let the machines help them with it. Quite the conundrum.

The bottom line on writing better ads

Google knows humans are too busy to write great ads at scale. Yet humans don’t believe ML can do that job for them. What we’ll see are more hybrid solutions where the machine provides suggestions and makes it easy for the human to edit and deploy them at scale. RSAs are another good example: the humans provide the machine with relevant options to choose from but the engine’s ML has the freedom to combine those human-suggested elements in whatever way it believes will create the most relevant experience for the user.

Don’t ask ‘if’ automation will disrupt your business, but rather ‘when’

This was said by Todd Rowe, global managing director for Google Marketing Solutions. That same sentiment was expressed by Ginny Marvin during her keynote at SMX Advanced in June. The reality is that ML gets better as it gets access to more data and as computing power continues to rise.

Todd believes there’s about a two-year time frame before new technology, like automation in PPC, will be disruptive. That means advertising professionals have roughly two years to figure out how they will work with a new technology. If they wait longer, that new technology may cost them their livelihoods. Dire, right?

Here’s the thing though… we don’t have to be the victims of automation. We can use it to build better agencies and stronger PPC teams.

Industry thinking about the impact of automation on PPC has continued to evolve, as has my own: part of what PPC pros need to do is create their own automation.

Todd makes a similar point and says that agencies need to think of how to automate their agency process.

The ad engines build incredibly powerful automations using the latest in machine learning. Most advertisers simply can’t compete and build a better automation, so rather than compete, they should determine how to complement the technology. I think the answer is “automation layering.”

In one example of automation layering, the engine handles bidding using target CPA Smart Bidding and the advertiser layers on their own automations, even simple ones like automated rules and alerts that let them know when Smart Bidding is starting to fail due to some unexpected factors affecting conversion rates, like a flash sale or an outage affecting conversion tracking.
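A minimal sketch of such a layered alert, in plain JavaScript with made-up data and thresholds, could compare the latest day’s conversion rate to the trailing average and flag large deviations:

```javascript
// Illustrative "automation layering" alert: flag days where the
// conversion rate deviates too far from the trailing average,
// which could indicate broken tracking or a flash sale.
// Data and tolerance are invented for the example.
function conversionRateAlert(dailyRates, tolerance) {
  const history = dailyRates.slice(0, -1);        // all but the latest day
  const today = dailyRates[dailyRates.length - 1]; // the latest day
  const avg = history.reduce((sum, r) => sum + r, 0) / history.length;
  const deviation = Math.abs(today - avg) / avg;   // relative deviation
  return { avg, today, deviation, alert: deviation > tolerance };
}

// Conversion rate collapses on the last day (e.g., a tracking outage).
const rates = [0.051, 0.049, 0.05, 0.052, 0.048, 0.005];
const check = conversionRateAlert(rates, 0.5); // alert if >50% off average
console.log(check.alert); // → true
```

A real version would pull the daily rates from a Google Ads script report and send an email instead of logging, but the decision logic would be this simple.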

The bottom line on PPC in an automated world

Automation is here to stay, and the PPC pro’s role will change in the next two to five years. Even some of the most successful practitioners are delivering great results with simple automations of their own: for every simple but time-consuming task they automate, they gain time to experiment with all the new things Google keeps announcing, which puts them at the head of the pack and makes them the sought-after thought leaders in PPC.


I learned a tremendous amount at Google Marketing Live and only wish I’d had more time to attend more sessions so I could have shared more in this post. Tools and features aside, the biggest trends we heard about at the event are about privacy, machine learning and how humans fit into this ever-evolving picture.


Automation layering: How PPC pros retain control when automation takes over /automation-layering-how-ppc-pros-retain-control-when-automation-takes-over-318592 Fri, 21 Jun 2019 15:06:36 +0000 /?p=318592 By replacing the manual work done by the PPC expert with an automation that follows their logic, PPC teams can still have more control over automations created by the ad engines.

The post Automation layering: How PPC pros retain control when automation takes over appeared first on Search Engine Land.

The PPC track at SMX Advanced kicked off with a keynote by Ginny Marvin where she considered the future of an industry where automation is increasingly doing more of the work humans used to do. Her message was that we can’t escape automation so we must find a way to coexist with the machines.

The topic of automation in PPC comes up a lot but I suspect that when our industry talks about the impact of automation, what is considered are mostly automations built by the likes of Google and Microsoft… disruptive (but not necessarily bad) capabilities like Smart Bidding, close variant keywords, responsive search ads, etc.

But nobody ever said that advertisers can’t be disruptors too. They too can build automations to change the game and give themselves a competitive edge.

Having to build your own automations may sound daunting but remember that they don’t have to be cutting-edge like machine learning in order to be useful. In this post, I’ll explain an easy way to get started with your own automations using the principle of “automation layering.”

Automations from the engines are better with human help

In my new book, Digital Marketing in an AI World, I explain that humans plus machines usually perform better than machines alone. This is not a new concept and one most of you have probably come across in some form or other. One specific example I used to share in presentations came from Wired in 2014 and said that, “The National Weather Service employs meteorologists who, understanding the dynamics of weather systems, can improve forecasts by as much as 25 percent compared with computers alone.”

Because of the potential for better results, PPC pros want to remain involved. They have knowledge about the business that could meaningfully impact results. Sometimes there simply is not enough data for a machine learning system to come up with the same insight. So it’s generally agreed upon that humans + machines can outperform machines alone.

Generally, we tend to translate this concept into the PPC world by saying that account managers need to work together with automations from the engines.

When humans work together with automations from the ad engines like Google, the results are generally thought to be better than if the automation didn’t have the help of a smart PPC account manager.

Automations from the engines are better with automations from advertisers

Then I started thinking about the role human PPC managers need to play for the premise to be true that humans + machines outperform machines alone. I realized that the humans in that equation could actually be replaced by machines as well, but in this case, machines that are controlled by the PPC pro and not the ad engine. PPC pros could benefit from the control (since they define the automation) and the time savings (because they don’t need to exert control manually).

So we should try to replace some forms of human control with new layers of automation and see if that delivers the same benefits as humans + machines. If we can write down the steps we take, we can teach a machine to do those steps for us. And it can be a simple rule-based approach which is much simpler to create than something based on machine learning.

Humans don’t need to do repetitive manual work to help the automations from the engines. They can teach their own machines to automate their process.

The concept behind automation layering is not a new idea. In engineering, solutions can be broken down into systems that can themselves be connected to other systems. Each system accepts inputs and returns outputs and so long as there is agreement over the format of inputs and outputs, many systems can be strung together and work seamlessly together to solve more complex problems.

Likewise, an automation could interact with other automations. In PPC, let’s call this principle “automation layering.”  This is an important concept because it’s the next evolution of what PPC pros have been doing for years: using their own insights to control what Google does. But just like Google is getting ever-more automated, our control over it should also become more automated.

By replacing the manual work done by the PPC expert with an automation that follows their logic, PPC teams can still reap the benefits of having more control over automations created by the ad engines.

Let’s look at why automation layering makes sense in PPC.

Escaping automation is not an option

The reason humans worry about automations created by the engines is that we can’t escape these. They are launched at the engine’s discretion and whether we like it or not, we have to spend time figuring out how they impact our work. Given how busy the typical PPC manager is, this extra work is not something to look forward to.

Despite the promise of great things, the truth is that success with new automations depends on experimentation and reskilling, both tasks that require time to do well. To take an example from aviation, cutting corners on reskilling when new automations are launched can lead to disastrous results, as seen with the 737 MAX. Luckily the stakes in PPC are not as high, but I believe the analogy is relevant.

Automation layering for close variants

Some new automations cannot be turned off, so they force us to change how we work with Google Ads. Close variants are a recent example of this type of change. In September of last year, Google redefined what the different keyword match types, like “exact match,” mean.

Some account managers now spend extra time monitoring search terms triggered for exact match keywords. This would be a great form of human control to turn into automation layering where the PPC manager turns their structured logic for how they check close variants into an automation that does it automatically.

There are two specific ways I’ve shared to layer an automation on top of Google’s exact match keywords to keep control when they expand to close variants with similar meaning.

The first way is to simply compare the performance of the close variant to that of the underlying exact keyword. If a user-defined threshold for performance is met, the variant can automatically be added as a new keyword with its own bid, or as a negative keyword if its performance is significantly lower. Note that close variants used in conjunction with Smart Bidding should already get the appropriate bid to meet CPA or ROAS targets, but it can’t hurt to add your own layer of automation to confirm this.
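Here is a rough sketch of that first approach in plain JavaScript. The data shape and thresholds are invented for illustration; a real version would pull these numbers from a search terms report:

```javascript
// Sketch: compare a close variant's conversion rate to its underlying
// exact keyword's and decide whether to promote it to a keyword,
// add it as a negative, or keep watching. Thresholds are illustrative.
function classifyCloseVariant(variant, exactKeyword, options) {
  const { minClicks, ratioToPromote, ratioToNegate } = options;
  if (variant.clicks < minClicks) return "wait"; // not enough data yet
  const variantConvRate = variant.conversions / variant.clicks;
  const exactConvRate = exactKeyword.conversions / exactKeyword.clicks;
  const ratio = variantConvRate / exactConvRate;
  if (ratio >= ratioToPromote) return "add-as-keyword";
  if (ratio <= ratioToNegate) return "add-as-negative";
  return "keep-monitoring";
}

const exact = { clicks: 1000, conversions: 50 };     // 5% conv rate
const goodVariant = { clicks: 120, conversions: 7 }; // ~5.8% conv rate
const badVariant = { clicks: 150, conversions: 1 };  // ~0.7% conv rate
const opts = { minClicks: 100, ratioToPromote: 0.9, ratioToNegate: 0.3 };

console.log(classifyCloseVariant(goodVariant, exact, opts)); // → "add-as-keyword"
console.log(classifyCloseVariant(badVariant, exact, opts));  // → "add-as-negative"
```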

The second way is to use the Levenshtein distance calculation to find how far the close variant is from the exact keyword. It is a simple calculation that adds up the number of text changes required to go from one word to another. Every character added, deleted, or changed adds one point. Hence going from the correct spelling of my company name “Optmyzr” to the common typo “Optmyzer” has a Levenshtein distance of 1 (for the addition of the letter “e”). Going from the word “campsite” to “campground” on the other hand has a score of 6 because 4 letters need to be changed and 2 need to be added.

Layer your own automation on top of close variants to determine how different the close variant is to the exact match keyword. The Levenshtein distance function can be used to calculate the number of text changes required to go from one text string to another.
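The Levenshtein calculation itself is easy to implement with the classic dynamic-programming approach. This plain JavaScript sketch reproduces the two examples above:

```javascript
// Classic dynamic-programming Levenshtein distance: counts the
// insertions, deletions, and substitutions needed to turn
// string a into string b.
function levenshtein(a, b) {
  const rows = a.length + 1;
  const cols = b.length + 1;
  const d = Array.from({ length: rows }, () => new Array(cols).fill(0));
  for (let i = 0; i < rows; i++) d[i][0] = i; // delete all of a's prefix
  for (let j = 0; j < cols; j++) d[0][j] = j; // insert all of b's prefix
  for (let i = 1; i < rows; i++) {
    for (let j = 1; j < cols; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(
        d[i - 1][j] + 1,       // deletion
        d[i][j - 1] + 1,       // insertion
        d[i - 1][j - 1] + cost // substitution (or free match)
      );
    }
  }
  return d[a.length][b.length];
}

console.log(levenshtein("Optmyzr", "Optmyzer"));    // → 1
console.log(levenshtein("campsite", "campground")); // → 6
```

A layered automation could flag close variants whose distance from the exact keyword exceeds some threshold for human review.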

With a Google Ads script, we could write our own automation that turns these manual checks into fully automated ones. Because it’s an automation that we can define, it’s as powerful as the more manual human control that we used to have to put in to get the benefits normally associated with humans + machines.

Automation layering for Smart Bidding

Other automations, like Smart Bidding, are optional, but given their pace of improvement, it’s just a matter of time before even the most ardent fans of doing PPC manually simply won’t be able to make enough of a difference to charge a living wage for their manual bid management services.

The machines are simply better at doing the math that predicts future conversions and using this expected conversion rate to turn an advertiser’s business goals around CPA or ROAS into a CPC bid that the ad auction can use to rank the ad against all others.

That said, remember that Smart Bidding is not the same as automated bidding. Part of the bid management process is automated, but there’s still work for humans to do. Things like setting goals and ensuring measurement is working are just two examples of these tasks.

Smart bidding doesn’t mean the entire bid management process is automated. Account managers still need to control dials for seasonality, conversion types, and fluctuating margins. These well-defined processes are great things to automate so they can be layered on Google’s Smart Bidding automation.

Besides needing to dial in adjustments for seasonality, special promotions and figuring out how to connect these limited controls to business goals like acquiring new customers, driving store visits or driving higher repeat sales, there’s still the point that most companies care about profits. Despite what we may think after hearing of Uber’s $1 billion quarterly loss, the reality is that most companies don’t have hordes of cash from VCs and a recent IPO so profits are what helps these businesses grow. Curiously, Google Ads doesn’t really have a Smart Bidding strategy geared towards profits.

So it’s up to the human PPC pro to bridge that gap and perhaps add some automation layering. One way to drive towards profitable PPC is to take margins into account when setting ROAS goals.

More profitable items (the ones with higher margins) can have lower ROAS targets. Remember ROAS in Google is “conv value/cost” (i.e., conversion value divided by ad costs). Assuming the conversion value is the cart value of the sale, for an item with a better margin more of that cart value is the product markup. So a lower ROAS can still deliver a profit whereas for items with low margins, less of the cart value is the markup and hence a higher ROAS is needed to break even.

PPC pros could manually assign different products to different smart shopping campaigns with different ROAS targets but that would be tedious and time consuming, especially if the margins for existing products were to change due to promotions and sales events. A smarter solution would be to apply automation layering and use a tool or script that sends products automatically to the right smart shopping campaigns where Google’s automations could take over.
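To make the math concrete: since ROAS is conversion value divided by cost, and margin is the fraction of conversion value that is markup, ads break even when ROAS equals 1 divided by margin. The sketch below (plain JavaScript; the campaign tiers and product shape are hypothetical) derives a break-even ROAS from margin and routes each product to a campaign tier:

```javascript
// Break-even ROAS = 1 / margin: a 50% margin breaks even at a ROAS
// of 2 (conv value / cost = 2), while a 20% margin needs a ROAS of 5.
function breakEvenRoas(margin) {
  return 1 / margin;
}

// Route a product to the first hypothetical campaign tier whose
// ROAS target covers the product's break-even ROAS.
function assignCampaign(product, tiers) {
  const target = breakEvenRoas(product.margin);
  const tier = tiers.find(t => t.roasTarget >= target);
  return tier ? tier.name : "manual-review";
}

const tiers = [
  { name: "high-margin", roasTarget: 2.5 },
  { name: "low-margin", roasTarget: 6.0 },
];

console.log(breakEvenRoas(0.5));                                // → 2
console.log(assignCampaign({ sku: "A1", margin: 0.5 }, tiers)); // → "high-margin"
console.log(assignCampaign({ sku: "B2", margin: 0.2 }, tiers)); // → "low-margin"
```

Rerunning a routine like this whenever margins change (say, during a sale) is exactly the kind of well-defined process that can be layered on top of Google’s Smart Bidding automation.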


The engines are automating many things we used to have lots of control over because we used to do them manually: from finding new keywords, to setting better bids, to writing ads. But when the people behind the businesses that advertise on Google get a say, results can be better than if the engine’s automation runs entirely on its own.

Just like Google is adding automations, so should you. Use the concept of automation layering to your advantage to retain the level of control you’re used to while also saving time by letting the machines do your work.
