Frederick Vallaeys – Search Engine Land

A script to create a Google Ads Experiments dashboard
Thu, 12 Sep 2019

This script generates a dashboard with a comprehensive view of experiments across My Client Center.

The post A script to create a Google Ads Experiments dashboard appeared first on Search Engine Land.

Despite the constant addition of new automations to Google Ads, there’s still a lot for humans to do: for example, deciding the best combination of settings and automations that is likely to drive results for a particular campaign. What works best may differ from vertical to vertical, and even from account to account. That is why experimentation and testing are such a critical part of a PPC rockstar’s repertoire: from testing different bid strategies, to testing new ad formats like RSAs, to testing new ways to structure an account.

Sadly, testing at scale can be a bit of a pain due to the lack of an experiment dashboard in Google Ads. That’s why I’m sharing a free Google Ads script that generates an experiment dashboard and makes it easier for advertisers to stay on top of the experiments they’re doing.

Why we need an Experiment Dashboard

I wrote this script for a few reasons:

  1. There is no place in Google Ads to see all experiments across My Client Center. You can only see experiments for one account at a time as illustrated in the screenshot below.
  2. The experiment page shows only the most basic details. It doesn’t indicate winners and losers, or even show a comparison of metrics between the control and experiment. The only way to get that data is by clicking on each experiment to see its results and that seems like a waste of time.
Screenshot from Google Ads shows the limited data that is shown on an individual account’s Experiment page.

PPC rockstars are great at testing

I made the point in my book that testing is a key skill that sets a great PPC expert apart from a mediocre one. There are a few elements to being an expert tester and a PPC rockstar:

  1. Be smart about what you test. Know from experience what is a likely experiment to drive results and test that. Don’t waste money on ridiculous tests that are unlikely to drive significant positive outcomes.
  2. Know how to test rapidly. When experiments take too long to reach a conclusion, they bottleneck the account’s ability to improve. Two fairly common scenarios come to mind: (1) the base campaign doesn’t have enough volume, so splitting the traffic yields too little data over a reasonable test period and identifying a winner takes too long; or (2) there is plenty of data, but the control and experiment remain statistically tied and the experiment is allowed to keep running past the point when a winner could have been identified if there were one.
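To make “statistically tied” concrete, here is a minimal sketch of the kind of significance check a tester could run on an experiment’s results (illustrative math only, not code from the script shared later in this post):

```javascript
// Two-proportion z-test comparing the control's and experiment's
// conversion rates. Roughly, |z| > 1.96 means a winner can be called at
// ~95% confidence; a z near 0 after plenty of data suggests the arms
// are statistically tied and the test can be ended.
function conversionRateZScore(convControl, clicksControl, convExp, clicksExp) {
  var pControl = convControl / clicksControl;
  var pExp = convExp / clicksExp;
  var pPooled = (convControl + convExp) / (clicksControl + clicksExp);
  var standardError = Math.sqrt(
    pPooled * (1 - pPooled) * (1 / clicksControl + 1 / clicksExp)
  );
  return (pExp - pControl) / standardError;
}
```

For example, 80 conversions on 1,000 control clicks versus 120 conversions on 1,000 experiment clicks gives z ≈ 2.98, a callable winner; 100 versus 100 on the same clicks gives z = 0, a tie.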

Slow testing applied to Quality Score

When I worked at Google, one of the worst updates to the relevance system that I can remember was when we decided to give advertisers the ability to prove Google’s assessment of their keyword quality was wrong. I was on the team at the time so I had my share of the blame.

Before the change, keywords deemed irrelevant were simply disabled automatically and advertisers had no recourse. So to restore some control, it was decided that every advertiser would get a small allowance to test low relevance keywords. The idea was that rather than quickly disabling keywords, we’d let advertisers experiment with a few keywords the quality system was not fond of so that there’d be clear evidence whether the human or the machine was right.

The problem was that the system automatically picked which keywords to test from each advertiser’s account. Rather than prioritizing high volume keywords that could quickly be tested to conclusion, the system prioritized what it believed to be more relevant terms. But those tended to have lower volume and hence required a longer time to accrue enough data to come to a conclusive result. High volume keywords that could have been tested quickly simply got stuck in a queue behind keywords that took weeks to test and that made for unhappy advertisers.

In 2005, the concept of ‘in trial’ keywords was scrapped and modern Quality Score was introduced.

What the Experiment Dashboard script does

So testing is important, and doing it well is even more important. But when you have to stay on top of lots of tests, possibly across many accounts – as agencies do – it can be a huge time sink because there is no single place to go and see all the results. That is why I created this script.

The script fetches all the experiments across an MCC account or for an individual account (be sure to grab the appropriate version at the end of this post depending on whether you are placing it in an MCC or an individual account).

The script figures out the dates for each experiment – keeping in mind that each experiment may have run for a different time frame – and gets the key metrics like conversions, costs, etc. It also gets the traffic split between the control and experiment so that it can calculate the difference in observed performance as if the traffic had been split evenly.

Finally, it adds the data to a Google Sheet. One line per experiment, including the performance metrics. With this spreadsheet-based dashboard, advertisers can monitor performance across a multitude of experiments from a single view.
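The even-split adjustment in that last step can be sketched as follows; `normalizeToEvenSplit` is a hypothetical helper name for illustration, not necessarily what the script calls it:

```javascript
// Rescales a raw metric (conversions, cost, clicks, ...) as if the arm
// that produced it had received 50% of traffic, so an experiment that
// ran on, say, 30% of traffic can be compared fairly to its control.
function normalizeToEvenSplit(rawValue, trafficSharePercent) {
  return rawValue * (50 / trafficSharePercent);
}
```

With a 30/70 split, 30 experiment conversions and 70 control conversions both normalize to 50 – identical performance once the uneven split is accounted for.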

How to create an experiment in Google Ads

Now that we have a better way to monitor experiments, let’s briefly cover the mechanics of how to create experiments in Google Ads.

Advertisers start by creating a draft campaign based on an existing campaign. Once the draft has been set up with everything the advertiser wants to test, it can be promoted to an experiment. When creating the experiment, the draft is allocated a percentage of traffic (the traffic split) and a start and end date. It then becomes an experiment and the base campaign becomes the control.

Future changes to the base campaign – the control – will not be applied to the experiment. Hence if you want meaningful results about the experiment, you need to be careful about what you change in the control or make changes to both the control and experiment at the same time.

Keep in mind you can and should make changes in the control. For example, when testing manual versus automated bidding, you should manage bids as you intend to in the control. But you shouldn’t add new keywords because that changes a factor unrelated to what you’re trying to measure as part of the experiment.

Once the experiment has ended, the advertiser can decide whether to discard or promote it and replace the control. If promoted, whatever is in the experiment gets moved to the original campaign and that then runs again on 100% of traffic until a new experiment is started.

Google surfaces the results of an experiment only when you click into the experiment’s details.

Screenshot from Google Ads showing the difference in performance between the experiment and control.


Improving PPC accounts requires careful testing of the many new options that are constantly announced by Google. Accounts get better when winning experiments are merged into the account. And PPC rockstars get better by building knowledge of what works well, and when, and applying these findings to start higher quality experiments on future accounts.

I will be at SMX East on Nov. 13 to lead a session called “Awesome Tests, Profitable Results,” where I will share more details about this script as well as the methodology and results of my own experiments. I hope you’ll give the script a try and join me in New York for SMX.  

The script

Grab this code if you want a single account dashboard and will copy the code into an end-account:

Grab this code if you want an MCC experiment dashboard and will copy the code into the scripts section of an MCC account:


Towards the beginning of the script are lines that start with:

var spreadsheetUrl = "";
var sheetName = "Sheet1";

Be sure to add a link to your own Google Spreadsheet and add the name of the sheet (tab) where you want the dashboard to populate.
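For context, this is roughly how a Google Ads script opens a sheet from those two settings (a sketch of the typical pattern; the actual script’s internals may differ):

```javascript
var spreadsheetUrl = ""; // paste your Google Spreadsheet URL here
var sheetName = "Sheet1"; // the tab the dashboard should populate

// SpreadsheetApp is available inside the Google Ads scripts environment.
function getDashboardSheet() {
  var spreadsheet = SpreadsheetApp.openByUrl(spreadsheetUrl);
  var sheet = spreadsheet.getSheetByName(sheetName);
  if (!sheet) {
    throw new Error("No sheet (tab) named '" + sheetName + "' was found.");
  }
  return sheet;
}
```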


Are SKAGs dead after Google’s latest match type changes?
Mon, 19 Aug 2019

PPC pros can now leverage new ad formats like RSAs to further boost auction-time relevance.

The post Are SKAGs dead after Google’s latest match type changes? appeared first on Search Engine Land.

Google recently announced that same-meaning close variants will soon apply to phrase match and modified broad match keywords. These match types join exact match, which was the first to start showing ads for close variants with the same meaning in September 2018.

I’ve shared what I believe are the three most important things to do ahead of this round of changes: automate bid management, enable a nonlast-click attribution model, and set up an automated monitoring system that checks close variants. You can even grab my script to automate the monitoring process.

But will Google’s change to how keyword match types work have any impact on other ways PPC accounts are managed? More specifically, does this recent change obviate single keyword ad groups, commonly known as SKAGs? I’ll explain my reasoning in this post, but if you’re strapped for time, the answer, I believe, is that SKAGs will continue to play a useful role in boosting Quality Score, thereby reducing CPCs and improving the performance of PPC accounts.

What are SKAGs

SKAG stands for ‘single keyword ad group’. As the name suggests, it’s an ad group that has only one keyword in it, and often that one keyword will be an exact match keyword. For the purpose of this post, I am defining a SKAG exactly that way: an ad group with exactly one exact match keyword. This structure became popular as a way for advertisers to get more control over their accounts.

How SKAGs provide more control

Prior to the introduction of ‘close variants’ in 2014, using an exact match keyword meant the user had to type in exactly the same thing as the advertiser’s keyword for the ad to be eligible to appear. Thus, with only a single exact match keyword in an ad group, the advertiser knew exactly what the user searched and could write a super targeted, relevant ad for that user’s search.

SKAGs boost relevance and Quality Score

Having this control over ad relevance helped advertisers boost their Quality Score (QS). By focusing on a single keyword at a time, advertisers could write more compelling ads, thereby boosting CTR and hence achieving a higher QS.

And as I’ve written before, QS is a big part of how Google determines the Ad Rank and can make clicks cheaper when the QS increases.

“Ad Rank is a calculation of max CPC, quality score (expected CTR, ad relevance, landing page experience), the expected impact of ad extensions and ad formats as well as other contextual factors like location and device. It determines if your ad is eligible to show and where it appears on the page relative to other ads.”


What makes close variants the enemy of SKAGs

The meaning of match types has been radically redefined by close variants where exact match keywords no longer mean that the user’s query must be an exact letter-for-letter match to the advertiser’s keyword for an ad to show.

Advertisers can no longer write an ad that is guaranteed to show for exactly one query. So it seems that the point of doing SKAGs has been made impossible by Google. But while control has been reduced, SKAGs still help indicate advertiser preferences and hence can still boost QS and reduce PPC costs.

What remains the same: SKAGs help control which ad is shown

Here’s the thing, an exact match keyword is supposed to be given preference in the ad serving priority over another keyword that is a same-meaning close variant match of the user’s search. This has not changed. An advertiser who wants to show a specific ad for a specific query can still put that query as an exact match keyword in a SKAG and know that it is as likely as before to trigger the intended ad and deliver the associated quality score benefits.

Long-term PPC pros may argue with that point because as we all learn after doing PPC for a long time, the ad prioritization algorithm is tricky and can’t be entirely relied on. This is why my company Optmyzr has long had optimization tools to add negative keywords automatically when it is found that Google is serving a different ad than the one intended by the advertiser.

But the complexity and reliability of this process aside, the introduction of close variants hasn’t changed how Google picks the keyword that enters the auction when there are multiple possibilities. To understand this better, read Ginny Marvin’s recent explanation of keyword prioritization.

What has changed: SKAGs need to be managed like non-SKAGs

What has changed is that SKAGs now need to be managed a bit more like non-SKAGs because the exact match keyword can all of a sudden show ads for a wider variety of queries than before.

Advertisers can control which ad will show for a particular query by adding that query as an exact match keyword in a SKAG, but they cannot control for which additional close variants that same ad may now also get triggered without adding a lot of negative keywords.

The query management process that used to only apply to broad, modified broad and phrase match keywords is now equally important for exact match keywords in single keyword ad groups.

Query management in a close variant world

The reason Google has so many match types to choose from is that they know it’s near impossible for advertisers to guess all the unique ways users will formulate queries, even when looking for the same thing. Providing only exact match keywords would lead to many missed advertising opportunities.

Google has said since 2003 that 15% of queries are unique and they reaffirmed this stat with data from May 2019. That stat doesn’t actually mean what you might think though; the 2019 stat says that on any given day 15% of queries happen only once. In the past, that stat applied to longer windows of time, sometimes 90 days. So there are actually fewer unique searches now than in the past and that may have something to do with autocomplete but that’s a whole different topic.

In the end, it’s still an important stat because even if you were able to add every possible keyword to your account, some keywords would be disabled for low search volume. And some advertisers would run into keyword limits for their accounts. So whether we like it or not, everyone has to rely on Google’s algorithms at some level to show ads for all relevant searches.

But that doesn’t mean advertisers should stop managing keywords and queries and just depend on Google. PPC pros should look at the search terms reports to identify queries to add as negative or positive keywords.

I covered this in my book, Digital Marketing in an AI World. Even when so much is automated, some PPC tasks remain important enough for human oversight – I describe this as the PPC pro playing a role similar to that of airline pilots.

Do I still manage keywords when Google seems to automate it all?

Let me explain why query management is still critical by considering the possible counter-argument. One could say negative keywords are not important when using smart bidding because the Google system automatically deprioritizes queries that don’t lead to conversions by setting lower bids, which would cause bad search terms to stop triggering ads. Likewise, it could be argued that Google’s system for close variants is so good at identifying relevant queries that it’s no longer worth spending time developing good keyword lists. ‘Why bother if Google won’t respect exact match keywords anyway?’ one could say.

But over-reliance on these automations may be risky. Consider that Google changes its algorithms and thresholds periodically. This means that what is considered “same-meaning” today may not be tomorrow.

Another risk is conversion tracking may break, or bad data may enter the system due to technical issues like a landing page outage. Automations depend on good data to make good decisions and even the best PPC experts can’t guarantee with 100% confidence that their data will always be good. There are too many external factors and other players involved.

Only by explicitly telling Google which queries you want your ad to show for, and which other ones are unlikely to lead to a high-quality lead, can you provide a level of safety against the aforementioned issues.

As part of day-to-day query management – which I believe is still an important task despite all of Google’s automations – once you identify a great new keyword in a search terms report, it may make sense to add it as a keyword rather than hope Google’s machine learning continues to judge it relevant. Better still, by adding it in a SKAG with an ad written for just this term, its QS can be boosted and it may achieve a higher ad rank than when it triggered ads as a close variant match to another keyword.

SKAGs need RSAs too

The purpose of a SKAG has always been to drive better QS with more relevant ads, and that hasn’t changed. The tools to do so, however, have. This has nothing to do with close variants, but everything to do with automated ads such as responsive search ads (RSAs).

Advertisers should add RSAs across all ad groups to drive more volume and better relevance. You can read more about how RSAs drive incremental volume and how to monitor their performance against expanded text ads.

Historically SKAGs have had several purpose-written expanded text ads to drive the best possible results for a single keyword. SKAGs should continue to contain these types of text ads. The text should still be written for the keyword without concern for the other queries it could now trigger due to same-meaning close variants. Continued query management as described above will ensure that good new queries get moved into their own SKAGs where they can also have a purpose-written, QS-boosting ad.

So won’t adding a responsive search ad to a SKAG dilute those QS benefits? It may seem so because RSAs are another example of advertisers ceding control to the machines. But the goal of RSAs is to show the most relevant ad so its goals are aligned with those of advertisers using SKAGs to improve QS.

When writing the components of the RSA (up to 15 headline variations and 4 descriptions), use ad text components that specifically reference the keyword itself as well as its benefits to users who did that particular search.

The advertiser’s task remains to write compelling text; it’s just that the text no longer has to be locked into a set order, and the machines can recombine it in whatever way maximizes results.


SKAGs, or single keyword ad groups with one exact match keyword, continue to help advertisers craft a unique message for specific search terms. This can boost ad relevance, one of the components of Quality Score which can, in turn, reduce advertising costs.

Because exact match keywords may now trigger ads for a whole slew of same-meaning close variants, SKAGs now require search term management, something that wasn’t the case prior to 2014.

PPC pros who have used SKAGs before can continue to use them for the same purpose but can now leverage new ad formats like RSAs to further boost auction-time relevance.

So that’s the bottom line: if you believed in SKAGs before, there’s no reason to stop believing in them now.


Heard from Google, the biggest trends impacting PPC are privacy and automation
Thu, 18 Jul 2019

The bottom line on privacy is that user trust and business data involving customers must be shared with care as third-party tools gain importance with different search engines.

The post Heard from Google, the biggest trends impacting PPC are privacy and automation appeared first on Search Engine Land.

We’re now well past this year’s Google Marketing Live event, which was hosted in June in San Francisco. By now you’ve most likely heard about all the latest betas and product changes – the “what” of what was announced. I myself am most excited about the additional controls that are coming to automated bidding. But what most media did not cover as much is the thinking behind the new launches – in other words, how Google executives talked about the industry and why the new features address the biggest challenges advertisers face.

I believe taking a moment to reflect on some subtle cues of how the announcements were delivered can help us glean insights into Google’s inner workings. What is the ads team thinking? What might that mean for what they will build next? And what does it mean for us PPC pros as the landscape of Google Ads capabilities is constantly shifting?

I truly believe in the value of reading between the lines because even in my days as a Googler, I simply couldn’t know every detail of what Google Ads was doing, yet during Q&A sessions at conferences like SMX, I was expected to have answers. Oftentimes, I had the answer because I knew how the product and engineering leaders thought, and what motivated them. All I had to do was connect the dots.

So let’s take a look at some of the statements made by Google executives at Google Marketing Live 2019 that I found interesting and what I think they may mean for the industry.

Greatest businesses in the world are founded on user trust

Prabhakar Raghavan, SVP of Google Ads, focused on the need for privacy, a growing concern among regulators. We’re all still catching our breath from the major changes we had to make to our websites in the past year with GDPR in Europe but this may just have been the beginning. In fact, US-based advertisers who didn’t worry about GDPR will almost certainly have to think about the impact of the California Consumer Privacy Act (CCPA) which goes into effect January 1, 2020.

Raghavan said that consumers have redefined their expectations and now expect to be able to seamlessly move across the web and across devices while having a personalized experience and at the same time have their privacy protected.

Prabhakar Raghavan, SVP of Google Ads, speaking about privacy at Google Marketing Live.

So Google is working on ways to continue delivering relevant ads while using the least amount of user data, says Raghavan. These are difficult problems to solve, and at last year’s GML event we got a glimpse into the type of technology Google is building to solve them. For example, double-blind encryption technology lets multiple parties contribute data into a system that joins it together, but where neither contributing party can get any personally identifiable data back out.

Raghavan says that the greatest businesses in the world are founded on user trust and Google obviously wants to be one of the world’s greatest companies.

One of the things you may have heard me repeat more than once is that we can make automated bidding based on machine learning (like tCPA and tROAS) better if we give it better signals about our business. It was summed up really well in a post recapping my session at SMX Advanced where I said something to the effect of: “We must focus on giving the machines the right goals in order to train them correctly.” But business data about conversions is usually about customers so sharing it with a third party like Google requires a lot of care to remove personally identifiable data.

The bottom line on privacy

As privacy concerns mount, and search engines take them more seriously, advertisers will find it more challenging to bring their data about what drives their business into the engines. We already saw customer match being scaled back due to privacy concerns after unscrupulous advertisers submitted lists of users whose permission they lacked. Without this data, the machine learning can’t learn from meaningful signals, which means results from strategies that rely entirely on the engines will be subpar compared to those that have found a way to combine internal ML with that of the engines.

I expect we’ll see more ways to bring our data into the engines through Azure from Microsoft or Ads Data Hub from Google. Unfortunately, it seems unlikely that we will be able to use technology from one engine to inform decisions on another engine (e.g. use Facebook Ads audience data to better target those users when they search on Google). To achieve that, third-party tools will gain importance.

The cloud is dead

To say that the cloud is dead seems like a crazy statement, right? I would have said so myself… after all, everything is moving to the cloud. What is not to like about having a supercomputer at your disposal to do things our own devices simply can’t? Privacy is the answer.

As powerful and useful as Amazon Alexa is, many people simply don’t want to be listened to all the time. And now that Echo devices often have cameras, the creepiness factor of being watched constantly only goes up. But it’s thanks to the power of the cloud that Alexa can make sense of even my three-year-old’s questions.

The bottom line on the future of the cloud

Part of the answer, according to Google, is federated learning, a way of doing machine learning where the user’s private training data never has to go into the cloud. There’s still going to be a cloud, but new ways have to be invented to give our own devices the capability to do things locally so that all private data can be kept secure locally. We may also see terminals like Echo and Nest devices become more powerful again. Whereas we had a trend toward doing more processing in the cloud, we may now see a reversal driven by privacy concerns.

Creating a great ad is hard

This was said by Nicky Rettke, director of product management for YouTube Ads. Creating a great ad is one of the most common challenges Google hears from advertisers. And while she’s talking about YouTube, the same holds for search ads as well. We have an audit tool in Optmyzr (my company) and one of the structural checks it can run on accounts is to look for excessive usage of the same headlines or descriptions across many ad groups. I’ve seen accounts spending well in excess of $1 million per month on Google Ads where thousands of ad groups all use the same headline.

Mike Rhodes, a PPC agency founder and smart friend of mine, said that perhaps it’s because if advertisers ran many different variations across their account, they’d find it harder to update all those ads when a new directive came in from the company’s branding team, or when new promotions were launched.

Regardless of the reason, Nicky’s on to something when she says that creating ads, let alone “great” ads, is not usually top of mind for advertisers. Yet when I asked PPC pros during a recent #ppcchat on Twitter what they were least likely to trust to automation, they said creating ads. So it’s a task humans often skip, yet one they’re not willing to let the machines help them with. Quite the conundrum.

The bottom line on writing better ads

Google knows humans are too busy to write great ads at scale. Yet humans don’t believe ML can do that job for them. What we’ll see are more hybrid solutions where the machine provides suggestions and makes it easy for the human to edit and deploy them at scale. RSAs are another good example: the humans provide the machine with relevant options to choose from but the engine’s ML has the freedom to combine those human-suggested elements in whatever way it believes will create the most relevant experience for the user.

Don’t ask ‘if’ automation will disrupt your business, but rather ‘when’

This was said by Todd Rowe, global managing director for Google Marketing Solutions. That same sentiment was expressed by Ginny Marvin during her keynote at SMX Advanced in June. The reality is that ML gets better as it gets access to more data and as computing power continues to rise.

Todd believes there’s about a two-year time frame before new technology, like automation in PPC, will be disruptive. That means advertising professionals have roughly two years to figure out how they will work with a new technology. If they wait longer, that new technology may cost them their livelihoods. Dire, right?

Here’s the thing though… we don’t have to be the victims of automation. We can use it to build better agencies and stronger PPC teams.

Industry thinking about the impact of automation on PPC has continued to evolve, as has my own, because part of what PPC pros need to do is create their own automations.

Todd makes a similar point and says that agencies need to think of how to automate their agency process.

The ad engines build incredibly powerful automations using the latest in machine learning. Most advertisers simply can’t compete and build a better automation, so rather than compete, they should determine how to complement the technology. I think the answer is “automation layering.”

In one example of automation layering, the engine handles bidding using target CPA Smart Bidding and the advertiser layers on their own automations, even simple ones like automated rules and alerts that let them know when Smart Bidding is starting to fail due to some unexpected factors affecting conversion rates, like a flash sale or an outage affecting conversion tracking.
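As a concrete (and much simplified) sketch of that layering idea, the alert might look like the following. The 80% threshold, the recipient address and the yesterday-versus-30-day-average comparison are all assumptions for illustration, and the AdsApp/MailApp calls only run inside the Google Ads scripts environment:

```javascript
// Pure decision rule, kept separate so the threshold is explicit:
// flag when yesterday's conversions fell more than dropPercent below
// the typical daily average.
function conversionsLookBroken(yesterdayConversions, typicalDailyConversions, dropPercent) {
  return yesterdayConversions < typicalDailyConversions * (1 - dropPercent / 100);
}

// Scheduled daily inside Google Ads scripts, this layers a simple
// human-defined safety check on top of Smart Bidding.
function main() {
  var account = AdsApp.currentAccount();
  var yesterday = account.getStatsFor("YESTERDAY").getConversions();
  var typical = account.getStatsFor("LAST_30_DAYS").getConversions() / 30;
  if (conversionsLookBroken(yesterday, typical, 80)) {
    // Smart Bidding may be training on bad data; tell a human.
    MailApp.sendEmail(
      "you@example.com", // hypothetical recipient
      "Possible conversion tracking outage",
      "Conversions yesterday: " + yesterday + " (typical day: " + typical.toFixed(1) + ")"
    );
  }
}
```

The point of separating the decision rule from the plumbing is that the advertiser, not the engine, defines what “starting to fail” means.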

The bottom line on PPC in an automated world

Automation is here to stay, and the PPC pro’s role will change in the next two to five years. Even some of the most successful practitioners are delivering great results with simple automations of their own: for every simple but time-consuming task they automate, they gain time to experiment with all the new features Google keeps announcing, and that is how they get to the head of the pack and become the sought-after thought leaders in PPC.


I learned a tremendous amount at Google Marketing Live and only wish I’d had more time to attend more sessions so I could have shared more in this post. Tools and features aside, the biggest trends we heard about at the event are about privacy, machine learning and how humans fit into this ever-evolving picture.


Automation layering: How PPC pros retain control when automation takes over
Fri, 21 Jun 2019

By replacing the manual work done by the PPC expert with an automation that follows their logic, PPC teams can still have more control over automations created by the ad engines.

The post Automation layering: How PPC pros retain control when automation takes over appeared first on Search Engine Land.

The PPC track at SMX Advanced kicked off with a keynote by Ginny Marvin where she considered the future of an industry where automation is increasingly doing more of the work humans used to do. Her message was that we can’t escape automation so we must find a way to coexist with the machines.

The topic of automation in PPC comes up a lot, but I suspect that when our industry talks about the impact of automation, it mostly means automations built by the likes of Google and Microsoft: disruptive (but not necessarily bad) capabilities like Smart Bidding, close variant keywords, responsive search ads, etc.

But nobody ever said that advertisers can’t be disruptors too. They too can build automations to change the game and give themselves a competitive edge.

Having to build your own automations may sound daunting but remember that they don’t have to be cutting-edge like machine learning in order to be useful. In this post, I’ll explain an easy way to get started with your own automations using the principle of “automation layering.”

Automations from the engines are better with human help

In my new book, Digital Marketing in an AI World, I explain that humans plus machines usually perform better than machines alone. This is not a new concept and one most of you have probably come across in some form or other. One specific example I used to share in presentations came from Wired in 2014 and said that, “The National Weather Service employs meteorologists who, understanding the dynamics of weather systems, can improve forecasts by as much as 25 percent compared with computers alone.”

Because of the potential for better results, PPC pros want to remain involved. They have knowledge about the business that could meaningfully impact results. Sometimes there simply is not enough data for a machine learning system to come up with the same insight. So it’s generally agreed upon that humans + machines can outperform machines alone.

Generally, we tend to translate this concept into the PPC world by saying that account managers need to work together with automations from the engines.

When humans work together with automations from the ad engines like Google, the results are generally thought to be better than if the automation didn’t have the help of a smart PPC account manager.

Automations from the engines are better with automations from advertisers

Then I started thinking about the role human PPC managers need to play for the premise to be true that humans + machines outperform machines alone. I realized that the humans in that equation could actually be replaced by machines as well, but in this case, machines that are controlled by the PPC pro and not the ad engine. PPC pros could benefit from the control (since they define the automation) and the time savings (because they don’t need to exert control manually).

So we should try to replace some forms of human control with new layers of automation and see if that delivers the same benefits as humans + machines. If we can write down the steps we take, we can teach a machine to do those steps for us. And it can be a simple rule-based approach which is much simpler to create than something based on machine learning.
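A written-down process really can be just a few lines of code. Here is a minimal rule-based sketch in JavaScript (the language Google Ads scripts use); the data shape, function name and thresholds are hypothetical, not from any particular tool:

```javascript
// A minimal rule-based automation sketch (hypothetical names and thresholds).
// Each branch codifies a check an account manager would otherwise do by hand.
function evaluateKeyword(kw, targetCpa) {
  const cpa = kw.conversions > 0 ? kw.cost / kw.conversions : Infinity;
  if (kw.clicks >= 100 && kw.conversions === 0) {
    return 'PAUSE';            // lots of clicks, nothing to show for them
  }
  if (cpa > targetCpa * 1.5) {
    return 'LOWER_BID';        // well above the acceptable cost per acquisition
  }
  if (cpa < targetCpa * 0.5) {
    return 'RAISE_BID';        // converting cheaply; bid for more volume
  }
  return 'NO_CHANGE';
}

console.log(evaluateKeyword({ clicks: 120, conversions: 0, cost: 90 }, 20)); // PAUSE
console.log(evaluateKeyword({ clicks: 50, conversions: 2, cost: 80 }, 20));  // LOWER_BID
```

The point is not the specific thresholds; it is that the logic stays yours, and the machine merely executes it on a schedule.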

Humans don’t need to do repetitive manual work to help the automations from the engines. They can teach their own machines to automate their process.

The concept behind automation layering is not a new idea. In engineering, solutions can be broken down into systems that can themselves be connected to other systems. Each system accepts inputs and returns outputs and so long as there is agreement over the format of inputs and outputs, many systems can be strung together and work seamlessly together to solve more complex problems.

Likewise, an automation could interact with other automations. In PPC, let’s call this principle “automation layering.”  This is an important concept because it’s the next evolution of what PPC pros have been doing for years: using their own insights to control what Google does. But just like Google is getting ever-more automated, our control over it should also become more automated.

By replacing the manual work done by the PPC expert with an automation that follows their logic, PPC teams can still reap the benefits of having more control over automations created by the ad engines.

Let’s look at why automation layering makes sense in PPC.

Escaping automation is not an option

The reason humans worry about automations created by the engines is that we can’t escape them. They are launched at the engine’s discretion, and whether we like it or not, we have to spend time figuring out how they impact our work. Given how busy the typical PPC manager is, this extra work is not something to look forward to.

Despite the great things these automations promise, the truth is that success with them depends on experimentation and reskilling, both tasks that take time to do well. To take an example from aviation, cutting corners on reskilling when new automations are launched can lead to disastrous results, as seen with the 737 MAX. Luckily the stakes in PPC are not as high, but I believe the analogy is relevant.

Automation layering for close variants

Some new automations cannot be turned off, so they force us to change how we work with Google Ads. Close variants are a recent example of this type of change. In September of last year, Google redefined what the different keyword match types, like “exact match,” mean.

Some account managers now spend extra time monitoring search terms triggered for exact match keywords. This would be a great form of human control to turn into automation layering where the PPC manager turns their structured logic for how they check close variants into an automation that does it automatically.

There are two specific ways I’ve shared to layer an automation on top of Google’s exact match keywords to keep control when they expand to close variants with similar meaning.

The first way is to simply check the performance of the close variant to that of the underlying exact keyword. If a user-defined threshold for performance is met, it can automatically be added as a new keyword with its own bid, or as a negative keyword if the performance is significantly lower. Note that close variants when used in conjunction with Smart Bidding should already get the appropriate bid to meet CPA or ROAS targets, but regardless it can’t hurt to add your own layer of automation to confirm this.
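That first approach might be sketched as follows; the data shapes and names here are hypothetical illustrations, not real Ads API objects:

```javascript
// Sketch of the close-variant check (hypothetical data shapes).
// Compare a close variant's conversion rate to its underlying exact keyword
// and decide whether to promote it to a keyword or add it as a negative.
function classifyCloseVariant(variant, exactKeyword, threshold) {
  const variantCr = variant.clicks > 0 ? variant.conversions / variant.clicks : 0;
  const exactCr = exactKeyword.clicks > 0 ? exactKeyword.conversions / exactKeyword.clicks : 0;
  if (exactCr === 0) return 'KEEP_MONITORING';  // no baseline to compare against
  const ratio = variantCr / exactCr;
  if (ratio >= 1 + threshold) return 'ADD_AS_KEYWORD';   // outperforms the exact keyword
  if (ratio <= 1 - threshold) return 'ADD_AS_NEGATIVE';  // significantly underperforms
  return 'KEEP_MONITORING';
}

// Variant converts at 10% vs. the exact keyword's 5%, with a 25% threshold:
console.log(classifyCloseVariant(
  { clicks: 200, conversions: 20 },
  { clicks: 500, conversions: 25 },
  0.25
)); // ADD_AS_KEYWORD
```

Run on a schedule, a check like this replaces the weekly manual scan of the search terms report.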

The second way is to use the Levenshtein distance calculation to find how far the close variant is from the exact keyword. It is a simple calculation that adds up the number of text changes required to go from one word to another. Every character added, deleted, or changed adds one point. Hence going from the correct spelling of my company name “Optmyzr” to the common typo “Optmyzer” has a Levenshtein distance of 1 (for the addition of the letter “e”). Going from the word “campsite” to “campground” on the other hand has a score of 6 because 4 letters need to be changed and 2 need to be added.
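The Levenshtein calculation itself is only a few lines of code. A minimal sketch in JavaScript, using the standard dynamic-programming approach:

```javascript
// Levenshtein distance: the number of single-character insertions, deletions,
// and substitutions needed to turn one string into another.
function levenshtein(a, b) {
  // dist[j] holds the distance from the current prefix of a to b.slice(0, j).
  let dist = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    let prevDiag = dist[0];
    dist[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const temp = dist[j];
      dist[j] = Math.min(
        dist[j] + 1,                                // deletion
        dist[j - 1] + 1,                            // insertion
        prevDiag + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution (free on match)
      );
      prevDiag = temp;
    }
  }
  return dist[b.length];
}

console.log(levenshtein('Optmyzr', 'Optmyzer'));    // 1
console.log(levenshtein('campsite', 'campground')); // 6
```

A variant with a distance of 1 or 2 is usually a misspelling you can safely leave alone; larger distances flag close variants worth a human review.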

Layer your own automation on top of close variants to determine how different the close variant is to the exact match keyword. The Levenshtein distance function can be used to calculate the number of text changes required to go from one text string to another.

With a Google Ads script, we could write our own automation that turns these manual checks into fully automated ones. Because it’s an automation that we can define, it’s as powerful as the more manual human control that we used to have to put in to get the benefits normally associated with humans + machines.

Automation layering for Smart Bidding

Other automations, like Smart Bidding, are optional. But given their pace of improvement, it’s just a matter of time before even the most ardent fans of doing PPC manually simply won’t be able to add enough value to charge a living wage for manual bid management services.

The machines are simply better at doing the math that predicts future conversions and using this expected conversion rate to turn an advertiser’s business goals around CPA or ROAS into a CPC bid that the ad auction can use to rank the ad against all others.

That said, remember that Smart Bidding is not the same as automated bidding. Part of the bid management process is automated, but there’s still work for humans to do. Things like setting goals and ensuring measurement is working are just two examples of these tasks.

Smart bidding doesn’t mean the entire bid management process is automated. Account managers still need to control dials for seasonality, conversion types, and fluctuating margins. These well-defined processes are great things to automate so they can be layered on Google’s Smart Bidding automation.

Besides needing to dial in adjustments for seasonality and special promotions, and figuring out how to connect these limited controls to business goals like acquiring new customers, driving store visits or driving higher repeat sales, there’s still the point that most companies care about profits. Despite what we may think after hearing of Uber’s $1 billion quarterly loss, the reality is that most companies don’t have hoards of cash from VCs and a recent IPO, so profit is what helps these businesses grow. Curiously, Google Ads doesn’t really have a Smart Bidding strategy geared towards profit.

So it’s up to the human PPC pro to bridge that gap and perhaps add some automation layering. One way to drive towards profitable PPC is to take margins into account when setting ROAS goals.

More profitable items (the ones with higher margins) can have lower ROAS targets. Remember ROAS in Google is “conv value/cost” (i.e., conversion value divided by ad costs). Assuming the conversion value is the cart value of the sale, for an item with a better margin more of that cart value is the product markup. So a lower ROAS can still deliver a profit whereas for items with low margins, less of the cart value is the markup and hence a higher ROAS is needed to break even.
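To make that concrete: profit is conversion value × margin minus ad cost, so profit is zero exactly when ROAS (conv value / cost) equals 1 / margin. A quick illustrative sketch (the numbers are examples, not benchmarks):

```javascript
// Break-even ROAS from product margin.
// profit = value * margin - cost, which is zero when value / cost = 1 / margin.
function breakEvenRoas(margin) {
  if (margin <= 0) throw new Error('margin must be positive');
  return 1 / margin;
}

console.log(breakEvenRoas(0.5)); // 2 → a 50%-margin item breaks even at 200% ROAS
console.log(breakEvenRoas(0.2)); // 5 → a 20%-margin item needs 500% ROAS to break even
```

Any ROAS target above the break-even value leaves room for profit, which is why high-margin items can run with lower targets.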

PPC pros could manually assign different products to different smart shopping campaigns with different ROAS targets but that would be tedious and time consuming, especially if the margins for existing products were to change due to promotions and sales events. A smarter solution would be to apply automation layering and use a tool or script that sends products automatically to the right smart shopping campaigns where Google’s automations could take over.


The engines are automating many things we used to have lots of control over because we used to do them manually: from finding new keywords, to setting better bids, to writing ads. But when the people behind the businesses that advertise on Google get a say, results can be better than if the engine’s automation runs entirely on its own.

Just like Google is adding automations, so should you. Use the concept of Automation Layering to your advantage to retain the level of control you’re used to while also saving time by letting the machines do your work.


What clients expect from their PPC agency /what-clients-expect-from-their-ppc-agency-317447 Fri, 24 May 2019 18:01:14 +0000 /?p=317447 Keep clients happy with in-person meetings and short, frequent check-ins. It is as critical to a healthy relationship as delivering the numbers they want.

The post What clients expect from their PPC agency appeared first on Search Engine Land.

I recently attended a panel at HeroConf where three advertisers working with PPC agencies shared what would make them happy with the relationship. In this post, I’ll share what stood out along with some ideas for how to ensure your agency can meet similar expectations.

And even if you’re not an agency or consultant, these points apply to any situation where you’re working with a client or stakeholder.

Biggest mistakes agencies make with new clients

Mistake 1: The sales team over-promises

The initial stages of a project set the tone that defines the relationship, so it’s critical to meet or exceed expectations at the start. One of the best ways to put yourself in a position to achieve early success is to set reasonable expectations. Seems obvious, but it often doesn’t happen because there is a disconnect between the agency sales team and the account management team.

For example, one panelist mentioned it’s common to get the A-team during the sales process, but as soon as the ink on the contract has dried, get handed off to a B-minus team that’s supposed to deliver on all the lofty expectations that were used to close the deal.

Mistake 2: Not planning time for on-boarding

Another panelist mentioned that hiring a new agency is very similar to hiring a new employee. The success of a new hire depends a lot on the level of effort that is invested in their initial ramp-up. Perhaps even more so than with new employees, a new agency needs help from the client to understand their business deeply enough so that they can become a strategic partner and give useful advice. Since they don’t work at the client’s office, they won’t absorb institutional knowledge the way new employees will so there must be a plan for it.

As an agency, this means you should plan for a ramp-up period in agreements with clients and set the expectation that you will need their help to learn their business. For example, make it clear that the first couple of meetings may have to be driven by the client to teach the agency about their business. Only later will the agency be able to take ownership and start producing results.

Mistake 3: Starting work in the middle of a critical period

Because there is some time needed to ramp up, one of the panelists said that the timing of the start of a new engagement is critical. An e-commerce company, for example, might not want to start work with a new agency in Q4 when holiday sales can make or break their annual results. An agency needs to build in a buffer for onboarding and ensure that they have an adequate runway to make necessary changes to the account before the client’s busiest time of the year starts.

How agencies can build trust with clients

Meet in person at least once

A common theme amongst the panelists was the desire of a client to build a personal connection with their agency team. It may not be possible for agencies to meet their clients in person all the time, but having some initial in-person meetings certainly breaks the ice and opens up a more direct, honest line of communication.

In my last two jobs I’ve seen the power of personal connections first hand.

I represented Google as a public face of AdWords. When I represented Google Ads year after year at the same events, I got the sense that advertisers liked being able to share feedback with me, a person they could see, rather than over email to a sometimes faceless support team.

My job was to relay the feedback to the engineers and product managers and that same feedback would have gotten there if it was emailed to the support team, but people felt better sharing it with a person they knew. I think part of it was that people knew I’d have to be back at the same conference the next year and they could grill me if Google’s solutions hadn’t properly addressed their most burning needs.

At Optmyzr, we use technology that’s easy enough for marketers to use on their own. That means we never get to meet most of our customers in person. But when we do meet customers face to face at industry events, they simply seem more willing to send us their product feedback and are more receptive to getting proactive advice from our team when we send an email about a new capability of the product. It makes sense… when you know the people behind a company, the connection is more personal, the level of trust increases, and the customer treats the company more like another person than as a corporation.

Send executive reports in addition to what the client wants

Another point raised by a panelist was the need for CMO-level communications. Agencies frequently work with in-house teams who report up to the CMO, and the CMO controls the budgets for marketing investments, including the agency fees. So while the CMO has no day-to-day involvement with the agency, there is a lot the agency can do to help their counterparts with the client.

For example, providing a CMO or executive overview report is helpful. Sharing stats and charts in a format that can be directly integrated into presentations for the CMO is also helpful. The more the agency helps their client with the work of justifying the relationship, the less work the client has to do defending the agency and that generally reduces churn.

Make your agency stand out

PPC expertise makes agencies stand out

One expectation is that agencies will be on the cutting edge of PPC and know exactly what new things to try. This equates to the role of the PPC expert as a PPC doctor, something I’ve covered in my previous posts on how to keep your role relevant as artificial intelligence takes over more aspects of our jobs.

Clients hire agencies who are experts and who know the solutions that can fix their problems. You’re like a doctor in that you need to know the latest courses of treatments and how those may interact with other things your patient/client has going on.

Be willing to try something before it can be automated

Some new things that your client should try may be so new that there aren’t yet any good tools to scale the solution quickly. That means doing this sort of project may end up being more time consuming than the average project your agency takes on. But there must be a willingness to do the manual work if it means that you’re giving your client an edge. Of course, the client should understand that their agency fees will determine how much of this manual work they can get. If the agency isn’t fairly compensated, they may not work very hard at trying to retain the client.

Work with the latest technology

The ability to profitably deliver campaigns that take advantage of the latest capabilities of the PPC platforms depends on how good an agency’s technology is. Scripts are great for prototyping solutions. If you still haven’t tried scripts, I highly recommend adding this capability to your agency. If you need some ideas, fellow contributor Daniel Gilbert and I have both written extensively on this topic.

The 90-day plan for agency success

As mentioned before, exceeding expectations at the beginning of a new agency-client relationship is critical. So what should an agency focus on for the first 90 days when working with a new client?

Know why the client sought help from an agency

Agencies need to understand why they were hired and use that knowledge to deliver a great first quarter for their clients. Remember that the client went to an agency because something they were doing before wasn’t working. It could be they were unhappy because they weren’t hitting their KPIs, or because they felt their last agency wasn’t delivering enough value, or because they’re experiencing a lot of growth and need extra manpower to keep up with the demands of a growing business. Whatever the reason may be, make sure you know it and make it a priority to address that concern before all else.

It’s easy to do this wrong because during the sales process, you may convince yourself the client hired you because of one of the amazing capabilities you sold them on. And while that may indeed have swayed their decision, it may not be the core thing that will make them happy in the end.

Numbers aren’t everything

PPC pros are so used to measuring everything that it can sometimes be surprising to them when they deliver on the KPIs but their client is still unhappy. Results are just part of what makes a client happy; they also need to be made to feel good about the relationship.

Starbucks just changed its loyalty program: instead of 125 stars to earn a free drink, it now takes 150 stars. But coffee-lovers can now also earn more frequent but smaller rewards, like an extra shot of espresso, a flavor shot or other drink customizations, at lower star levels. The idea is that people are happier with more frequent rewards, even if they are smaller and ultimately more expensive. From a logical standpoint, it makes no sense, yet we have to remember that we’re dealing with humans, and logic isn’t everything. As an agency, never forget that keeping your client happy with in-person meetings and frequent but short check-ins may be just as critical to a healthy relationship as delivering the numbers they want.


Clients have many options when looking for help with their PPC accounts. They can build bigger in-house teams or hire one of thousands of agencies. Hopefully knowing a bit more about what goes on in the minds of advertisers who went the route of hiring an agency will help you be more successful in landing new clients and keeping your existing ones happy.


Where Google got more inventory to show Responsive Search Ads ads may surprise you /where-google-got-more-inventory-to-show-responsive-search-ads-ads-may-surprise-you-313970 Thu, 14 Mar 2019 16:14:26 +0000 /?p=313970 RSAs use machine learning to match the best ad variation with each search using data from ad performance segmented by query. To get this data, you need to delve into the scripts and Ads API reports.

The post Where Google got more inventory to show Responsive Search Ads ads may surprise you appeared first on Search Engine Land.

Advertisers know that to be effective with Google search ads they must be able to stand out in a competitive environment, and that often means having ads that qualify to be shown in the coveted positions above the search results. And since PPC is hardly a secret anymore, there are always more advertisers willing to compete for those few highly desirable ad slots. So when we hear that a new Google Ads feature, like Responsive Search Ads (RSAs), lets our ads show on new inventory, that gets our attention. So where exactly is Google getting this new inventory that they say RSAs may qualify for? And what is the right way to evaluate the performance of this new inventory? Let’s take a look.

Ad tests need to limit the variables

When we do A/B ad tests through our tools at Optmyzr (my company), we firmly believe that we should compare apples to apples. In a perfect world, we would be able to replicate the exact same conditions for a search multiple times and show a different ad variant every time to get data about which ad drives the best results, whether that result is the best CTR, conversion rate, or conversions per impression (a combination of the previous two metrics).

But we don’t live in a perfect world, so we have to sacrifice a bit of precision and limit the variables of our tests as much as possible. In Google Ads, that is really difficult because many factors change and we can’t always control for them. In some cases, the best we can do is compare similar ad formats (e.g., expanded text ads) within one ad group targeting the same audience. While that may sound like an apples-to-apples comparison, it often isn’t: the ad group has different keywords, those keywords match an even wider range of queries, and the ads are shown to entirely different people.

In some cases, we will compare different ad formats against one another but that only makes sense if those ads were competing for the same ad slot. For example, it’s a fair test when an ad slot on the display network could have accepted either a display ad or a text ad. But it’s not a fair test when some of that inventory could only have shown text ads and others only display ads. That’s an extra variable that muddles the results.

RSAs should be evaluated on incrementality

With RSAs, a simple comparison of the performance of the RSA to the ETAs in the same ad group is a flawed test because Google says the RSAs have access to inventory where the ETA could not have appeared. Google’s Matt Lawson says “There are all sorts of instances where you might end up serving impressions in a low CTR placement that you would never have qualified for before.” He’s not talking purely about Google showing ads in places that didn’t show ads before the arrival of RSAs. He’s also talking about showing ads on searches where your static expanded text ads (ETAs) may previously have had too low an Ad Rank to garner an impression. I’ve spoken with other Googlers to confirm that at least some of this new inventory is not new at all. It’s inventory that’s always been there, but where we may not have had good enough ads to make it available to us.

So given that the inventory is sometimes different, what we need to measure is whether the incremental volume driven by the RSA justifies keeping those ads active. Andy Taylor makes the point that only Google can measure the incrementality of new ad formats like RSAs in his post describing why click-through and conversion rates matter less than before. He says this is the case “since there’s no way for advertisers to quantify ad placements that are off limits to ETAs for whatever reason.” But if we can define some of these new placements, I would argue that we can measure the performance.

How to find the new inventory RSAs gave access to

The new placements are search terms for which you previously didn’t qualify. Which ones exactly? Run a search terms report from right before and right after you enabled RSAs in an ad group. Finding the date when you first started RSAs is easy enough with the Change History.

Use the Change History in Google Ads to find the date when you first added responsive search ads to an ad group. Screenshot from Google.

Then see if there are entirely new queries for which there were no impressions before and some impressions after. While there certainly can be other reasons for these queries to have started showing your ads, like bid changes, algorithm changes by Google and changes in the competitive landscape, the addition of an RSA ad unit is also a plausible reason for the new inventory opening up to you.

Use the date range comparison on the Search Terms tab to find queries that only started showing impressions after responsive search ads were added to an ad group. Screenshot from Google.
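This before/after comparison can also be scripted once the two search-term lists are exported. A sketch, assuming hypothetical row shapes with `query` and `impressions` fields:

```javascript
// Find search terms that had zero impressions before RSAs were enabled
// but received impressions after — candidate "new inventory."
function findNewQueries(beforeTerms, afterTerms) {
  const seenBefore = new Set(
    beforeTerms.filter(t => t.impressions > 0).map(t => t.query)
  );
  return afterTerms
    .filter(t => t.impressions > 0 && !seenBefore.has(t.query))
    .map(t => t.query);
}

const before = [{ query: 'red shoes', impressions: 50 }];
const after = [
  { query: 'red shoes', impressions: 60 },
  { query: 'crimson sneakers', impressions: 12 },
];
console.log(findNewQueries(before, after)); // ['crimson sneakers']
```

As noted above, a query appearing in the result could still be explained by bid or competitive changes, so treat the list as candidates to investigate, not proof.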

Finding new inventory long after RSAs started

RSAs use machine learning to match the best ad variation with each search. As the system learns, it may turn off inventory that performs poorly and turn on inventory that appears promising. So it’s worthwhile to keep an eye on what this new inventory is on an ongoing basis. You can get this data by looking at ad performance segmented by query. Surprised at this recommendation because you’ve never seen a search term segment on the ads performance page in the Google Ads front-end (AWFE)? You’re right, that data is not there, but if you look beyond what’s in the Google Ads interface and delve into scripts and Ads API reports, it’s available.

As I’ve covered in previous posts, there are over 40 Google Ads reports available through the API, and they’re chock full of details you simply won’t see in the AWFE.

First, download an Ad Performance Report and include the Id and the AdType.

The Ad Performance Report can be downloaded using scripts or the API. Screenshot from Google.

Then download the Search Query Performance Report and include the Query, CreativeId, and all the metrics you want to check, like Impressions, Clicks, Conversions, etc.

The Search Query Performance report contains a reference to the CreativeId which can be used to connect search term performance with the ad that was shown for a particular search term. Screenshot from Google.

Finally, use spreadsheets to do a vlookup between the two sets of data so that next to each unique combination of a query for an ad, you see if the ad was an ETA or RSA.

Use the vlookup function in spreadsheets to connect the ad data with the search term data. Screenshot from Google.

Then sort the resulting sheet by query and you’ll start to see when a particular query has shown with both RSA and ETA ads. You’ll also see queries that have shown ONLY for RSA ads.

Rows highlighted in blue show search terms that only showed with RSAs, new inventory that should be judged on its incremental nature. Rows in green showed ads with multiple ad formats and can be compared in a more traditional A/B test. Screenshot from Google.
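The vlookup-and-sort steps above can also be done in a few lines of code: build a map from CreativeId to ad type, tag each query row, then group by query. The field and ad-type names below are illustrative, not exact report column names:

```javascript
// Join the Ad Performance report (ad id → ad type) with the Search Query
// Performance report, then find queries that only ever showed with RSAs.
function rsaOnlyQueries(adReport, queryReport) {
  const adTypeById = new Map(adReport.map(ad => [ad.id, ad.adType]));
  const typesByQuery = new Map();
  for (const row of queryReport) {
    const type = adTypeById.get(row.creativeId) || 'UNKNOWN';
    if (!typesByQuery.has(row.query)) typesByQuery.set(row.query, new Set());
    typesByQuery.get(row.query).add(type);
  }
  // Keep queries whose only ad format was the RSA.
  return [...typesByQuery]
    .filter(([, types]) => types.size === 1 && types.has('RESPONSIVE_SEARCH_AD'))
    .map(([query]) => query);
}

const ads = [
  { id: 1, adType: 'EXPANDED_TEXT_AD' },
  { id: 2, adType: 'RESPONSIVE_SEARCH_AD' },
];
const queryRows = [
  { query: 'red shoes', creativeId: 1 },
  { query: 'red shoes', creativeId: 2 },
  { query: 'crimson sneakers', creativeId: 2 },
];
console.log(rsaOnlyQueries(ads, queryRows)); // ['crimson sneakers']
```

Queries that appear with both formats are your traditional A/B comparison set; queries returned by this function are the ones to judge on incrementality.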

With these two techniques, you’ll start to get a sense of the incrementality Google is referring to.

Why didn’t my old ads qualify for these search terms?

So why do RSAs open up new search term inventory in the first place? Remember, Google Ads is an auction where the order of the ads is determined by Ad Rank, which is a function of the CPC bid, Quality Score, and other factors that impact CTR, such as the use of ad extensions.

Every time a search happens, a new auction is run. While an advertiser’s CPC bid may not change from one auction to the next, the QS can change dramatically based on Google’s machine learning Quality Score system, whose job is to predict the likelihood of an ad getting a click (driving CTR). That likelihood is significantly impacted by how the query relates to the ad. When the QS system is limited to a handful of static ETAs, it may not be able to pick one that is good enough to have the QS necessary to get the ad to show on the page. But when the ad group contains an RSA, the system can try to find a combination of components that will have a high QS for that particular search. And when it succeeds, the ad is suddenly eligible to participate in more auctions, hence getting access to new inventory. So it’s not that Google has unlocked new inventory that previously didn’t exist. Instead, machine learning has helped figure out how to create an ad that is relevant enough to access inventory that’s always been there.
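A deliberately simplified illustration of the eligibility effect — Google’s actual Ad Rank formula is not public beyond “bid, quality and other factors,” so the numbers and threshold here are purely hypothetical:

```javascript
// Toy Ad Rank model: bid × predicted quality must clear a minimum
// threshold before the ad can enter the auction at all.
function isEligible(cpcBid, qualityScore, rankThreshold) {
  return cpcBid * qualityScore >= rankThreshold;
}

const bid = 1.5;
const threshold = 9;
// A static ETA with mediocre predicted quality for this query:
console.log(isEligible(bid, 4, threshold)); // false → no impression
// An RSA combination the system judged more relevant to the same query:
console.log(isEligible(bid, 7, threshold)); // true → "new" inventory unlocked
```

Same bid, same query, same slot; only the predicted relevance of the assembled ad changed, which is exactly why RSA impressions can appear on searches the ETA never served.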

Machine learning needs a teacher; it needs you!

Now some advertisers say that RSAs don’t perform as well as ETAs. As Andy, Matt, and I have already pointed out, that may be a finding based on incomplete information because it may ignore the fact that the different formats trigger ads for entirely different queries. But what if you’ve accounted for that and they do perform worse? That sounds like an optimization opportunity rather than a reason to pause RSAs.

Help the machine learn to do better and don’t just turn off the feature. That may not happen though, because humans are funny about how they treat technology. Automotive researchers found that people tend to be quite forgiving of mistakes made by other humans who are learning something. If your 16-year-old drives poorly, you give advice and trust that they will learn from it and get better with experience. When a self-driving car makes the same mistake as your 16-year-old, humans tend to chalk it up to bad software, turn off the self-driving feature, and continue to drive manually.

Many of the exciting automation features we’re seeing in PPC these days are driven by machine learning, and as the name implies, some learning needs to happen before the results get good enough. How quickly a learning system gets good depends on having a good teacher. And even with the best teacher, an algorithm needs time to first train itself and later update what it’s learned as it starts to see the real-life results of its decisions.

So RSAs can only be as good as the ad components we provide. Google has guidelines for what constitutes good components, and I’ve provided a few scripts to mine your historical ads data for good ad components and n-grams. Once the ad unit has great text to experiment with, give it some time to run the experiments.

Just like broad match will eventually stop showing ads for related queries that fail to turn impressions into clicks, so will RSAs. Google makes no money from impressions; they make money from clicks. And Google is pretty strict about not showing things that seem irrelevant, i.e., RSA variations that never get clicked because these are a waste of space and a detriment to Google’s position as the provider of useful answers.


It’s easy to get carried away with anecdotes heard from other advertisers and decide that maybe RSAs don’t work all that well. I hope that the more you understand how to properly measure them, and the technology that will improve performance, the better you will be in a position to make an informed decision about where RSAs fit into your strategy for managing successful PPC accounts in a world that is ever more driven by automation. And if you’d like to learn more about the role of humans in digital marketing in an AI world, check out my first book.

The post Where Google got more inventory to show Responsive Search Ads may surprise you appeared first on Search Engine Land.

Google’s next chapter for metrics to focus on clarity once ‘average position’ is removed /googles-next-chapter-for-metrics-to-focus-on-clarity-once-average-position-is-removed-312947 Tue, 26 Feb 2019 16:05:06 +0000 /?p=312947 Fred Vallaeys explains why advertisers need to rethink bidding strategies and position metrics now that Google has announced it will sunset one of its oldest metrics later this year.

The post Google’s next chapter for metrics to focus on clarity once ‘average position’ is removed appeared first on Search Engine Land.

Average position was one of the original metrics in Google Ads, dating back to the launch of its search advertising product, then called AdWords. But as search advertising has evolved, what used to be a primary metric for making optimization decisions has lost its usefulness, so Google has announced that it will disappear later this year.

This means advertisers will need to rethink some dated bidding strategies, update reports they share with stakeholders and figure out how the new position metrics can replace what is being deprecated. But first, let me share my take on why this change is being made.

Why ‘average position’ is a poor metric to understand position

Historically the average position metric was useful because ads reliably showed up in consistent locations on the page. Knowing the average position of an ad meant you knew where your ad showed on a web page. Its physical “position” on the page correlated to the “average position” in reports.

For example, in the earliest days of AdWords, premium ads that were sold to big companies on a CPM basis were shown above the search results. Ads on the right side were reserved for smaller advertisers who paid on a CPC basis through what was then known as AdWords Select. So if you were an AdWords Select advertiser and your ad was reported as having an average position of 1, you understood it was the first ad on the right side of the SERP.

But then Google realized that the ads they were putting in premium locations on the page from advertisers paying on a CPM basis were making less money than the CPC ads on the right side. So they merged the two advertising programs and made all advertisers compete for all slots on the page based on Ad Rank, a metric combining the CPC bid and CTR. Position still equated to a physical location on the page, until Google made one more change in its effort to ensure only the most relevant ads would occupy the top of the page.

The ads with the highest rank would only be shown above the organic results if they met a certain relevance threshold. This was Google’s way of ensuring users would see only the most helpful ads above organic results. Now if your ad was reported as being in position 1, only one thing was certain – your ad was shown before all others (i.e., your ad was the winner of the auction). What was no longer certain is where it showed; it might have appeared at the top of the page, or on the right side if no ads met the top of page promotion threshold.

And this was just the start of the muddling of the meaning of “average position.” Google briefly started showing ads below the search results (and gave these slots to ads ranked above the ones that showed on the right side, meaning that ads with lower average positions would be seen before ones with higher positions). Later, ads disappeared from the right side, more ads started showing at the top of the page, thresholds and auction rules kept getting updated, new ad formats like Shopping started using a different layout, and new search syndication partners had their own ad slot locations. While average position continued to reflect an ad’s rank compared to all others, it became less and less clear what that position actually equated to in terms of a location on the page.

In essence, “average position” should have been named “auction rank” to better reflect its meaning. The word ‘position’ refers to a relative position compared to other advertisers and has nothing to do with a physical position on the page where the ad is shown. Advertisers often care more about where their ad is shown rather than who they were beating in the auction so the average position metric became less meaningful and it’s no surprise it is being sunset by Google.

Top position metrics bring back clarity

Being the leader in online advertising is a double-edged sword for Google. They got to pick the metrics that we all care about but they’re also locked into supporting those metrics for the long haul or face lots of questions.

Google Ads got started in a world where little could be automated yet Google wanted to give lots of control to its advertisers. So they decided to create structures like ad groups, and share metrics like average position to let advertisers understand what was happening and give control to take action at the same time. A lot of that legacy is difficult to undo, even now that it may simply make more sense to let machine learning handle a lot of the details.

Fortunately, in this case, Google is only sunsetting a metric after they feel they’ve introduced newer metrics that better inform advertisers about what they primarily care about: that their ads are shown in places where they will drive more business. Google has introduced four new metrics: “Impression (Absolute Top) %,” “Impression (Top) %,” “Search absolute top impression share” and “Search top impression share.”

These metrics tell advertisers two things: how often their ads are at the top of the page when they get an impression and what share of all the top of page impressions they’re getting.
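In formula terms, the four metrics split into those two families (a sketch; the field names are illustrative rather than the reporting API’s, and the “impression share” denominators come from Google’s estimate of eligible impressions):

```javascript
// Sketch: how the four new position metrics relate to impression counts.
// Field names are illustrative; real values come from Google Ads reporting.
function positionMetrics(c) {
  return {
    // Of the impressions the ad received, what share was (absolute) top of page?
    imprAbsoluteTopPct: c.absTopImpressions / c.impressions,
    imprTopPct: c.topImpressions / c.impressions,
    // Of the top-of-page impressions the ad was eligible for, what share did it win?
    searchAbsoluteTopIS: c.absTopImpressions / c.eligibleAbsTopImpressions,
    searchTopIS: c.topImpressions / c.eligibleTopImpressions
  };
}
```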

Bid-to-position is not a good way to set bids

Advertisers have long used average position as an input to bid management strategies. Remember that until Google introduced automated bidding (e.g., target CPA and target ROAS bid strategies), advertisers had to set their own CPC bids. Many advertisers set their CPCs based on their expectation of how likely clicks were to convert, something they might measure with conversion tracking. But many advertisers without conversion tracking set bids by looking at the average position. Some simply wanted to have their ad always be the “top” ad, so they bid as much as needed to keep an average position of 1. Others argued that clicks in position 1 were too expensive and that they’d rather get fewer but cheaper clicks so they set bids in an attempt to stay at lower positions but still on the first page of results. This is where bid-to-position bid strategies originated.

Nowadays, automated bidding is so ubiquitous and cheap that bid-to-position strategies simply don’t make a lot of sense for the majority of advertisers. They’d do far better by implementing proper conversion tracking so that automated systems can set the right CPC bids for each auction to achieve the target CPA or target ROAS.

Brand advertisers can use the new position metrics instead of ‘Average Position’

One group of advertisers who rightfully care about position are brand advertisers. Even though Google Ads is at heart a direct response advertising platform, there are brand advertisers who want to go beyond the Display Network and Video Ads on YouTube for branding and who want to run brand ads on search. In these cases, bidding to the absolute top of the page is the right strategy. This strategy doesn’t work very well with just the average position metric because that metric only says if the ad is the top-ranked in the auction, but not if it passed all the other criteria needed to be shown above the organic results or at the absolute top of the page. Google’s four new metrics offer far better data to use for advertisers who care about branding.

What we lose with the end of ‘average position’

The vast majority of advertisers will be better off when average position no longer exists and they look at the newly introduced metrics instead. But at my company, Optmyzr, we’ve found there are still some scenarios where average position is helpful, especially when looking at segmented data.

For example, we have a bid optimization tool that recommends geo bid adjustments or validates that automated bidding systems are doing a decent job with geographic differences in performance. Our tool’s recommendations are generated by a machine learning algorithm that looks at many factors, including average position. Specifically, it uses this metric to predict whether an increase in a geographic bid adjustment is likely to increase volume for that location. After all, there’s no point raising a bid for a location where an advertiser already dominates the auction. And while average position is available in a geo report, the new metrics are not. This means we can no longer reliably identify these opportunities from geo-segmented data.

This specific example won’t cause issues for most advertisers but the point is that there are advanced use cases relying on the average position metric that will be hard to fix until the new metrics are more widely available across all of Google Ads.


There’s never a dull day working in PPC and the sunset of one of the oldest metrics around is another clear illustration of that. As we’ve seen in the past (like with the deprecation of mobile campaigns and the later re-introduction of -100 percent device bid adjustments), Google does respond to the needs of its advertisers so this is a great time to share constructive feedback about how this change will impact you.

While I worked on Google Ads, I was involved in several updates related to Quality Score. I can tell you we cared a lot about what advertisers said because we couldn’t possibly know every use case. That’s the case here, too, so I, for one, really look forward to learning more about how advertisers use average position in unique ways and what sort of workarounds they’ll come up with before it disappears forever.


Here’s a new script to find the best text for new ads using an n-gram analysis /heres-a-new-script-to-find-the-best-text-for-new-ads-using-an-n-gram-analysis-312279 Fri, 15 Feb 2019 13:00:48 +0000 /?p=312279 This script analyzes the performance of commonly found word sequences to deliver additional insights about historical ad performance.

The post Here’s a new script to find the best text for new ads using an n-gram analysis appeared first on Search Engine Land.

Ad text optimization is a great example of an area of PPC automation where the machines won’t get very far without the help of PPC pros; they need ongoing support from humans to deliver results. Automations can easily pick winning and losing ads through statistical analysis and do even more sophisticated things, like predicting the best ad to show to individual users based on patterns discerned by machine learning models. But without compelling pieces of ad creative to experiment with in the first place, the machines have nothing to work from.

This is very different from other PPC automations like bid management where you have the option to set it up once and then mostly forget about it. To be clear, I think a set-it-and-forget-it approach to bidding is the wrong strategy but it is nevertheless possible. Ad management on the other hand can’t be automated to the same extent as bid management because the inputs that the machine requires are constantly changing and take a lot more time from the human PPC experts to set up and maintain.

Fortunately, even the manual process of crafting new messages can be helped with tools that give advertisers ready access to insights such as what elements of ad text have worked well previously. I recently shared a Google Ads script that deconstructs ads into their components — like headline and description — and reports aggregate metrics for commonly used phrases. This month I’m sharing a script that uses a different method to deliver additional insights about historical ad performance. The script shared at the end of this post is based on a popular script first shared by Daniel Gilbert and performs an N-Gram Ad Text Analysis.

What are n-grams and why are they useful in PPC

There’s much more to them, but for the purpose of this script, an n-gram is a word sequence where ‘N’ denotes the number of words in the sequence. A unigram is a single word, a bigram is two words, a trigram is three, and so on. In PPC, we can use n-grams to analyze the performance of commonly found word sequences. For example, we can find data for strings of words that appear frequently throughout many keywords or search terms. By aggregating data, we can more easily pick up on performance trends in accounts.
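As a minimal sketch of the technique (not the script’s actual code), counting n-grams across a set of texts takes only a few lines:

```javascript
// Sketch: count every n-word sequence across a list of texts.
function nGramCounts(texts, n) {
  var counts = {};
  texts.forEach(function (text) {
    var words = text.toLowerCase().split(/\s+/).filter(Boolean);
    for (var i = 0; i + n <= words.length; i++) {
      var gram = words.slice(i, i + n).join(' ');
      counts[gram] = (counts[gram] || 0) + 1;
    }
  });
  return counts;
}
```

Swap the raw counts for aggregated clicks, conversions or cost per gram and you have the performance analysis described in this post.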

Because we can restrict an n-gram analysis to as many or as few words as we want, we can use it to go one level deeper than we did with my previous script that looked at the performance of entire headlines or description lines. That means we can find the specific calls to action, unique value propositions, or offers that are driving results for campaigns.

In other words, whereas last month’s script can tell us which is a great performing variant of headline 1, this script can look inside headline 1 to help us determine if a sequence of words like “$5 off $49” or “10 percent off” usually delivers a better result.

Scripts can be edited to work with your own strategy

If you’re a frequent reader of scripts-related posts on this site, you’re probably already familiar with n-grams. That’s because contributor Daniel Gilbert wrote an n-gram script that analyzes search queries and shared it a few years ago.

But that script only works with search terms and not ads. Perhaps deconstructing ads was less important before Responsive Search Ads came along, but now it seems particularly interesting to have this sort of analysis available as part of a PPC toolkit. So I decided to use this opportunity to illustrate a point I’ve long been making about the power of scripts: You can leverage the work done by others and with just a few tweaks get it to do exactly what you need. So I started with Daniel’s code and changed the inputs from queries to ad texts to gain an entirely new insight.

Split ad text components like headline 1 into n-grams to get insight into what variations of a call-to-action drive results.

If you’re curious how difficult this change was: the main modification was updating the AdWords Query Language (AWQL) statement to pull data from the ad performance report instead of the search query performance report. With just that change, the script could handle the basics of my use case. But to make it work better, I added a few new user settings as well as some extra code to concatenate headlines and descriptions, so that a piece of text spanning two lines still counts as an n-gram.
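To illustrate the scale of the change, the heart of it looks something like this (a sketch; the field names follow the AWQL ad performance report, so treat them as assumptions to verify against the API version your account runs on):

```javascript
// Sketch: pull ad text fields instead of search queries (field names may
// differ by AWQL report version; verify before relying on them).
var AWQL =
  'SELECT HeadlinePart1, HeadlinePart2, Description, Clicks, Impressions, Conversions ' +
  'FROM AD_PERFORMANCE_REPORT ' +
  'WHERE Impressions > 0 DURING LAST_30_DAYS';

// Concatenate an ad's text parts so an n-gram spanning two lines still counts.
function fullAdText(row) {
  return [row.HeadlinePart1, row.HeadlinePart2, row.Description]
    .filter(Boolean)
    .join(' ');
}
```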

The script

You can copy and paste this code into your own Google Ads account (it takes only a few lines of modification if you want to use the code in an MCC account rather than a child account).


Get more conversions by deconstructing your PPC ads /get-more-conversions-by-deconstructing-your-ppc-ads-310642 Thu, 17 Jan 2019 13:05:53 +0000 /?p=310642 Columnist Frederick Vallaeys shares a new Google Ads script that will reveal what parts of your ads are worth keeping and which to discard.

The post Get more conversions by deconstructing your PPC ads appeared first on Search Engine Land.

Chefs have been playing with improving the presentation of food by deconstructing meals into their components before recombining them into dishes that are a better version of the ingredients’ former selves. Sounds almost exactly like what PPC experts are doing with ads these days. When creating a responsive search ad, we’re deconstructing our creatives into their core elements which Google can later recombine into must-click ads.

Fred Vallaeys

What’s particularly great about this process is that when we deconstruct our old ads, we can get rid of any of the parts that were less compelling and focus more on what users have liked in the past and what drove sales for advertisers. In this post I’ll share a Google Ads script you can use to deconstruct your own ads and learn what parts are worth keeping and which to discard.

Ad innovations reduce work, but not when it comes to ads

It’s clear the role of PPC experts is shifting in response to the types of automations being added to Google Ads. When it comes to bid management, the newest generation of automations, Smart Bidding, makes it a less time-consuming task. Advertisers no longer need to manage every keyword’s bid individually; instead they can focus on higher-level ROAS and CPA goals and manage those as their business evolves. Check out my recent posts about why bid automation is a must and what pitfalls to avoid if you’d like to know more about how bid management has changed over the past year.

I’ve long argued that despite the onslaught of PPC innovations, automations, and tools, there will always remain a need for human involvement if the goal is to produce the best possible PPC results.

For example, when it comes to the creative side of online advertising, like writing ads, new capabilities in Google Ads actually require PPC experts to do more work than before if they want the best results. Because there are more audience and demographic targeting options, it no longer suffices to write one ad for everyone searching for a keyword. Now we have to come up with the right ad for many segments of similar people searching for that same keyword.

Let me quickly illustrate this… If you wanted to sell Adidas sneakers in the days of AdWords (before it was called Google Ads), you might buy the keyword “buy adidas sneakers.” If you were doing a good job, you’d always have two or three ads in an A/B test so that you could improve your conversions per impression (by having the best combination of CTR and conversion rate).

1 keyword —> 2 or 3 ads

But then Google Ads started growing up and remarketing came along. Now you might know that some users who searched for that same keyword “buy adidas sneakers” had spent a lot of time browsing women’s shoes. With this knowledge you could write more compelling ads, saying something about women’s sneakers to users on this remarketing list.

1 keyword * several remarketing lists —> 2 or 3 ads for each remarketing list

Now there are also demographics like age, gender, and household income that can be targeted. A lot of advertisers use this capability to set different bid adjustments or to exclude users in the wrong segment, but it can also be leveraged to create a more direct connection through a more targeted ad.

1 keyword * 3 genders * 7 income ranges * 7 age ranges —> 2 or 3 ads for 147 possible demographic permutations

Today, there are even more audience targeting possibilities including in-market, detailed demographics, and similar audiences. We have the opportunity to say something meaningful to all these different segments when they search for our keywords but it takes more work than before to determine the right thing to say and to set up all the ads with the requisite campaigns, ad groups, and ad variations.
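The multiplication above compounds quickly; a quick sanity check of the math (the segment counts are the illustrative ones used above):

```javascript
// Sketch: multiply keyword count by every targeting dimension, then by
// the number of ad variants kept in each A/B test.
function adVariationsNeeded(keywords, segmentSizes, adsPerTest) {
  var permutations = segmentSizes.reduce(function (acc, n) {
    return acc * n;
  }, keywords);
  return permutations * adsPerTest;
}
```

One keyword across 3 genders, 7 income ranges and 7 age ranges yields 147 permutations, and testing 3 ads per permutation means 441 ads to write and maintain.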

Deconstructing ads into components reduces the need for thousands of variations

Google is addressing the exponential growth in the need for more ads with offerings like responsive search ads. Here a machine learning system can choose from a variety of different headlines and descriptions (Google calls these assets) and attempt to recombine those into the perfect ad for any given user. While these ads may not yet be ready to outperform manually created expanded text ads, they’re likely to see continuous improvements in performance. Remember, Google mostly makes money when people click on ads so they’ve got a pretty good incentive to make this work. And of course, thanks to Moore’s Law, machine learning capabilities are accelerating at a mind-blowing pace and a system that might be producing only so-so results today could deliver tremendous results in the very near future.

The ‘Ad Component Report’ script

Both Optmyzr (my company) and Google have tools that suggest which ad components, or assets, to try in responsive search ads. But I’ve heard quite a few customers ask for updates to our popular script that generates an Ad Component Report. The concept of the script is simple. It finds all ads that use the same headline, description, URL or path and aggregates the metrics. With this report the advertiser can get better insight from across their whole account about the types of ad components that get a good CTR, a great conversion rate, or best yet, a combination of both.
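The aggregation at the core of the report can be sketched in a few lines (illustrative code, not the script itself):

```javascript
// Sketch: aggregate metrics across all ads sharing the same component text.
function aggregateByComponent(ads, component) {
  var byText = {};
  ads.forEach(function (ad) {
    var key = ad[component];
    if (!key) return;
    var row = byText[key] ||
      (byText[key] = { clicks: 0, impressions: 0, conversions: 0 });
    row.clicks += ad.clicks;
    row.impressions += ad.impressions;
    row.conversions += ad.conversions;
  });
  // Derive CTR and conversion rate from the aggregated totals.
  Object.keys(byText).forEach(function (key) {
    var row = byText[key];
    row.ctr = row.impressions ? row.clicks / row.impressions : 0;
    row.convRate = row.clicks ? row.conversions / row.clicks : 0;
  });
  return byText;
}
```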

The script produces a Google Sheet with several tabs, one for each individual ad text component like headlines, descriptions, paths, and visible URL. It also has a few tabs for combinations of elements. Note that Google Ads Scripts are currently running on API reporting version v201806 and until Google upgrades this to v201809 the script cannot read the fields for headline 3 and description 2.

Copy and Paste this script into an individual Google Ads account (not an MCC account):

Script settings

There are just a handful of settings you can modify for this script:

  • currentSetting.spreadsheetUrl = “NEW”; —> the URL of the Google spreadsheet to overwrite. Or enter “NEW” to create a new spreadsheet every time the script runs.
  • currentSetting.time = “LAST_30_DAYS”; —> the date range for the stats. This must be one of the standard scripts report date ranges, e.g. LAST_7_DAYS, LAST_MONTH, …
  • currentSetting.accountManagers = “”; —> the comma separated list of Google usernames that should have access to the spreadsheet with the report.
  • currentSetting.emailAddresses = “”; —> the comma separated list of email addresses to notify when the script has finished running
  • currentSetting.campaignNameIncludesIgnoreCase = “”; —> the text that must be part of the campaign names to include in the analysis. Leave this blank to include all campaigns.
  • var currencyCode = “$”; —> this currency symbol will be added to the reports in the header columns for cost related fields.

If you’ve never added a script to Google Ads, we have a primer here.

How to use the ad component report

Use the script to automatically regenerate the latest ad component report for your account. You can then filter the report to show only the best performers and copy those over into new responsive or static ads.

The script generates a Google Sheet with one tab for each ad component. If the same component, like a headline, is used in multiple ads, the report shows the overall performance of that variation.


Showing the right ad to the right user is becoming easier thanks to new machine learning driven tools like responsive search ads. But it’s still up to human PPC experts to teach the machine what ad variations to try. And thanks to the many years of account history many of us have, we can leverage other automations, like Google Ads Scripts, to quickly sift through our data to find the best ads to feed into the machine.


PPC toolkit expands with Bing Ads Scripts’ cross-account operations /ppc-toolkit-expands-with-bing-ads-scripts-cross-account-operations-307867 Thu, 08 Nov 2018 19:19:48 +0000 /?p=307867 Columnist Frederick Vallaeys offers code samples for sending email and creating virtual labels with JSON to fill in some missing pieces.

The post PPC toolkit expands with Bing Ads Scripts’ cross-account operations appeared first on Search Engine Land.

Bing Ads app icon on mobile device

Bing Ads announced they now have scripts which can automate tedious and repetitive work just like advertisers have been doing with Google Ads Scripts. And it’s even possible to paste scripts from Google into Bing Ads where the system will automatically change the code as needed to work in Bing. But because the Bing capabilities are brand new and still in beta, there are a few things you should know before getting started.

Specifically, there is no support for email, labels or spreadsheets. So while we can automate some very useful stuff, there is no way to monitor the automation. And as I’ve written about before on this blog, monitoring automation is one of the key roles humans will play as automation in PPC becomes more widespread.

If we have to monitor our automations by manually looking at logs as the code is running, we won’t really save much time. Luckily, Bing has provided support for UrlFetch, which means we can build the missing pieces of the puzzle, and I’ve done that for you here. Take a look at the code samples at the end of the post for sending email and creating virtual labels with a JSON data store. There’s even a technique for sending the data to Google Sheets.

A unique Bing capability: Automate across accounts

Out of the gate, scripts for Bing work at the MCC and single account level. When working with an MCC script, you can execute a function in up to 50 accounts in parallel with the executeInParallel() function. Just like with Google Ads, this means you can run a common task in up to 50 accounts at once.
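Because of that 50-account cap, MCCs with more accounts need to split the work into batches, each handed to executeInParallel in its own run. The batching itself is plain code (a sketch; the selector method names in the comment are assumptions to check against the Bing Ads Scripts docs):

```javascript
// Sketch: split account IDs into batches of at most 50, the per-run
// limit for executeInParallel.
function batchAccounts(accountIds, batchSize) {
  var batches = [];
  for (var i = 0; i < accountIds.length; i += batchSize) {
    batches.push(accountIds.slice(i, i + batchSize));
  }
  return batches;
}
// Each batch would then be handed to something like (names assumed):
// AccountsApp.accounts().withIds(batch).executeInParallel('processAccount', 'allDone');
```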

But one nifty thing that appears to be new and exclusive to Bing Ads is that you can switch back and forth between accounts using the MCC app and then do some work in the selected account. This is interesting because it makes cross-account operations much easier, something that should be of interest to larger advertisers who maintain multiple accounts for the same business.

Image showing an example of Bing Ads Script

For example, you could find the best performing ad headline in one account and then add that to new ads in another account. This is technically also possible with parallel execution but it’s just a bit simpler to code up in Bing. And given that there are many marketers with limited coding experience dabbling with scripts, anything that makes it easier to write code is a welcome improvement.

What Bing Ads scripts can do

Bing scripts are adding new capabilities several times a month, but for now they still trail what you can do with Google Ads scripts. In terms of supported ads-related entities, here’s what Bing currently covers:

  • Accounts
  • Campaigns
  • Ad Groups
  • Keywords
  • Ads
  • Bids
  • Budgets

With this functionality, you can get stats for the most common entities and make automated optimization decisions like what should be paused because it’s performing poorly. You can change bids based on promotions, or you could shift budgets between campaigns based on performance.

Budget scripts are very popular based on my experience with my company, Optmyzr, and Bing Ads Scripts can handle some of the most frequent use cases out of the box. For instance, evaluating budgets in ways that more closely reflect how typical businesses think about them, i.e., monthly account-level budgets rather than campaign daily budgets.
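Pacing a monthly account-level budget with daily campaign budgets, for example, boils down to a small calculation a script can run every day (a sketch that assumes even pacing over the remaining days):

```javascript
// Sketch: spread what's left of a monthly budget evenly over the
// remaining days of the month.
function remainingDailyBudget(monthlyBudget, spentSoFar, daysLeftInMonth) {
  var remaining = Math.max(0, monthlyBudget - spentSoFar);
  return daysLeftInMonth > 0 ? remaining / daysLeftInMonth : 0;
}
```

A script would run this daily and write the result back as the campaign budget, which is exactly the kind of repetitive arithmetic scripts are good at.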

There are some notable things that are not yet supported by Bing Ads scripts:

  • Search Terms
  • Reports
  • Ad Extensions
  • Labels
  • Display
  • Shopping

While it’d be great to get support for extensions, different campaign types, and search terms management, we can still build many other useful scripts while we wait for these. However, not having labels and reports is a bit more limiting because these capabilities have been instrumental in writing better scripts, even for the entities that are supported.

Working without reports

So what do these limitations mean in terms of what you can expect to do today? Because we can’t use reports, we are limited to just the stats that are supported through the stats object and while that includes the most important ones like clicks, impressions, cost, conversions, etc., it’s not the full list of everything. Here is a list of the stats currently supported:

Many scripts I’ve written rely on reports to pull stats because it’s faster and works better for very large accounts. Because Bing Ads Scripts have limits similar to those of Google and can only pull 50,000 entities in iterators and 250,000 entities per script execution, analyzing an account with 250,001 keywords won’t work for now unless you do some serious workarounds that store temporary data after each script execution.

Working without labels

Because labels are not supported, it’s harder to build workflows because there is no easy way to mark what things have already been processed by a previous step of a multi-step process. For example, in Google Ads, we add a label to any campaigns that we pause with our Flexible Budget script. The label helps the script know what campaigns to re-enable at the start of the next month. With no label support, we have to find other ways to store information about what was done. A potential workaround is to store data in JSON format and write it to a file. At the start of each script execution, it should read in the JSON from the file and use it to decide what should be done by the current script run.

Working without email and spreadsheets

Almost every script I’ve written emails someone when it finds something of interest, whether a broken URL, or a link to the analysis. Because Bing Ads Scripts don’t yet support sending email, we need to rely on UrlFetchApp and some third-party APIs to be able to send an email. You could use a SendGrid account (they have free accounts for low usage) in combination with UrlFetchApp to send emails.

Many reporting scripts rely on Google Sheets to store the analysis in a format the user can easily view or use for further analysis. Unfortunately, there is no support for spreadsheets (or Excel) in Bing Ads Scripts at the moment. Again, UrlFetchApp comes to the rescue as it can be used to read and write files. See an example of how OneDrive can be used to read and write CSV files.

Code snippets to handle current limitations

Sending Emails from Bing Ads Scripts

This code can be used to send an email using your SendGrid account. Just be sure to update the code with your own API key.

Embed code:
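As a hedged sketch of what such a snippet involves (the request shape follows SendGrid’s v3 mail/send API; the UrlFetchApp call appears only as a comment since it runs exclusively inside the scripts environment, and the addresses are placeholders):

```javascript
// Sketch: build the UrlFetchApp options for a SendGrid v3 mail/send call.
function buildSendGridRequest(apiKey, to, from, subject, body) {
  return {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + apiKey },
    payload: JSON.stringify({
      personalizations: [{ to: [{ email: to }] }],
      from: { email: from },
      subject: subject,
      content: [{ type: 'text/plain', value: body }]
    })
  };
}

// Inside Bing Ads Scripts you would then call:
// UrlFetchApp.fetch('https://api.sendgrid.com/v3/mail/send',
//   buildSendGridRequest(API_KEY, 'me@example.com', 'script@example.com',
//     'Broken URLs found', 'See the analysis.'));
```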

Using JSON Between Script Executions

Use this code to store data about what a script did and then use it the next time the script runs to pick up where you last left off. This can help with limits on how many entities you can process in a single script run. To make this work, we’re using a free service that lets you create, edit, and read public JSON snippets.

The easiest way to use this is to manually create a JSON file and then enter the resulting filename in the script so that it can overwrite the file with the latest data every time it runs. To do this, first go to the JSON storage service and enter some random JSON like {“key”:”Value”} and hit ‘Save’. Then grab the resulting URL and put it on line 3 of the sample code.

Sample code for writing JSON

Embed code:

Sample code for reading JSON

Now that we’ve written the JSON to a data store, we’ll want to pick it up at the start of the next run of the script. That requires slightly different code and you can grab it here. Just be sure to use the same URI that you used to save the JSON in the previous script. And since this is a public store, consider checking the integrity of the data before you use it, just in case someone else accidentally overwrote your file.

Embed code:
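Putting the writing and reading halves together, the logic around the fetch calls amounts to the following (a sketch; the store URL is a placeholder and the UrlFetchApp calls appear only as comments):

```javascript
// Sketch: merge the entities processed by this run into the state that
// was loaded from the public JSON store at the start of the run.
function mergeState(previousState, processedNow) {
  var state = { processed: (previousState.processed || []).slice() };
  processedNow.forEach(function (name) {
    if (state.processed.indexOf(name) === -1) state.processed.push(name);
  });
  return state;
}

// In the scripts environment the surrounding calls would look like:
// var state = JSON.parse(UrlFetchApp.fetch(STORE_URL).getContentText());
// ... process only entities not yet in state.processed ...
// UrlFetchApp.fetch(STORE_URL, { method: 'put',
//   contentType: 'application/json',
//   payload: JSON.stringify(mergeState(state, namesProcessedThisRun)) });
```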

Storing Results in Google Sheets

The holy grail of saving data from scripts is with spreadsheets and since Google launched ads scripts first, most of us are used to using Google Sheets for this. It turns out that you can send Bing Ads data to a Google sheet but it takes a bit of work as explained here.
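The usual pattern is to publish a small Google Apps Script web app that appends whatever rows are POSTed to it, then call that endpoint from Bing via UrlFetchApp. A sketch of the Bing side (the deployment URL and the posted payload shape are assumptions for illustration, not the linked article’s exact code):

```javascript
// Sketch: shape a row of Bing Ads stats for POSTing to a Google Apps
// Script web app that appends it to a sheet. The URL is a placeholder.
function buildSheetPost(row) {
  return {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({ values: [row.campaign, row.clicks, row.cost] })
  };
}

// In the scripts environment:
// UrlFetchApp.fetch('https://script.google.com/macros/s/YOUR_DEPLOYMENT_ID/exec',
//   buildSheetPost({ campaign: 'Brand', clicks: 12, cost: 3.4 }));
```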


I couldn’t be more excited about getting a new tool in my PPC toolkit to help streamline optimizations. As I explore Bing Ads Scripts in more depth, I’ll continue to share learnings here. I hope you’ll reach out to me to share your own experiences so that we can create solutions together and help the PPC community be more effective.

For more, check out 3 differences between Bing Ads Scripts and Google Scripts you need to know.
