Aaron Levy – Search Engine Land
News On Search Engines, Search Engine Optimization (SEO) & Search Engine Marketing (SEM)

Can we trust automation in a time of crisis?
Fri, 03 Apr 2020 12:00:00 +0000
Review how your automation performed during the last volatile period to make sure your tool is making the right decisions during this one.

The post Can we trust automation in a time of crisis? appeared first on Search Engine Land.

It’s a challenging time to be a marketer. Consumers are distracted, financial futures are uncertain and volatility is all but a guarantee. It’s a time when self-awareness, realism and tact are crucial for success. We need to move fast, but stay controlled. Tomorrow is going to be different from yesterday, and two days from now different from today.

Can we trust automation in a crisis? When everything is different depending on the hour, can we trust machine learning? Will our automation move fast enough and provide the insights we need? Should we revert to the good ol’ days of staring at the screen for 24 hours a day, watching metrics change in real time?

Yes, automation can keep up with appropriate inputs. No, you should not revert to the stone age of SEM (even though my beard indicates otherwise). Each situation is different, so you must evaluate your own business through your own lens. Below are eight core elements to watch and adjust to ensure success for your search campaigns in a time of extreme volatility.

Bid modifiers must adjust along with behavior

First things first: your time-of-day bid modifiers are (likely) broken. Or rather, they need a reassessment. COVID-19 has thrust much of the world into hibernation, working from home with a disrupted family life to boot.

Normal demand curves don’t look normal anymore. I took a look through one of our “normal” clients to see how search volume changed pre and post crisis.

Weekends are adjusting, and the lunch-break and “getting to work” spikes have flattened. Volume used to take a nosedive on Friday and Saturday nights, with a slight rise Sunday evening as people got back to their computers. The current state is… well, different. I suspect it will be different for your accounts as well.

Odds are your curves have changed – you should check to see if your assumptions hold true.
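As a rough sketch of that check, you could compare each hour's share of clicks before and after the shift and flag the hours where your old assumptions no longer hold. The 20% threshold and the data shape here are illustrative assumptions, not a platform feature:

```python
# Sketch: compare pre- and post-crisis hourly click share to flag stale
# time-of-day bid modifiers. The relative-change threshold is illustrative.

def hourly_share(clicks_by_hour):
    """Normalize a 24-item list of clicks into share-of-day fractions."""
    total = sum(clicks_by_hour)
    return [c / total for c in clicks_by_hour]

def stale_modifier_hours(pre, post, threshold=0.20):
    """Return hours where click share shifted by more than `threshold`
    (relative change) -- hours whose bid modifiers deserve a fresh look."""
    flagged = []
    for hour, (a, b) in enumerate(zip(hourly_share(pre), hourly_share(post))):
        if a > 0 and abs(b - a) / a > threshold:
            flagged.append(hour)
    return flagged
```

Any hour it flags is a candidate for a new modifier; hours it doesn't flag may still be worth eyeballing if the account is small.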

We can expect geographic volume to shift dramatically as well. People aren’t commuting anymore, so we can expect a change in searcher behavior around cities. Crises (especially COVID) affect different cities, states and regions in different ways. As regulations around social distancing have shifted, so too can you expect regional search behavior to shift. Double-check your geographic bid modifiers and budget allocations – does performance pre-crisis still hold true in crisis?

I won’t go in depth to the shifts in device performance (as many already have at length), but device behavior shifts dramatically in a crisis as well. Particularly in times of COVID, if consumers are immobile there’s less of a need for a mobile device. Adjust accordingly. 

Inventory and news cycle will likely influence each other

Humans are inherently irrational creatures. We make decisions based on what we observe and react to what we see, rather than trusting a clear course of action. In times of crisis, that’s amplified a kabillion-fold. As such, inventory will often dictate demand and vice versa. If a news report is about a given product (all but guaranteed during a crisis), odds are the masses will take to search engines to hunt it down.

This can yield false positives and false negatives, impacting both long- and short-term performance. As the mad dash arrives for a given product, you’ll see a huge spike in conversion rate (while inventory remains). Following the rush will be a near-immediate cooldown. Make sure your inventory tracking is tip-top and tied to bid modifications so you’re not spending money to take in orders you can’t fulfill.
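One hedged way to sketch that inventory-to-bids tie (the thresholds, modifier values and function name are my own illustration, not any platform's API) is to scale bids by days of cover remaining:

```python
# Sketch: derive a bid modifier from remaining inventory so a demand spike
# doesn't keep spending after stock runs out. Thresholds are illustrative.

def inventory_bid_modifier(units_in_stock, daily_run_rate):
    """Scale bids down as days of cover shrink; signal a pause at zero stock."""
    if units_in_stock <= 0:
        return None  # pause the ad group / product entirely
    days_of_cover = units_in_stock / max(daily_run_rate, 1)
    if days_of_cover < 1:
        return -0.50   # severe cut: likely to sell out today
    if days_of_cover < 3:
        return -0.20   # mild cut while the rush lasts
    return 0.0         # healthy stock: no adjustment
```

The key design point is using run rate rather than raw stock, since a crisis spike changes how fast "plenty of inventory" disappears.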

While a crisis can represent unprecedented times, there are precedents you can review in your history to see how you react. Was your product ever cited on a reality show? Spotted on a celebrity? Touted as a miracle cure, if even for a few minutes? Each scenario will resemble the short term spike and tail-off of volume. Review how your bidding tools and inventory management system reacted. That will give you a good idea of how to react both now, AND when the crisis concludes.

Smart bidding

I’m going to address smart bidding strategies en masse, as the answer for each is similar. Look at your most volatile times. Every crisis is unique, volatile and requires different tactics to address. Algorithms can’t necessarily differentiate between an unprecedented crisis and a really good Black Friday sale.

You can generally assume that most bidding algorithms will favor the last 30 or so days’ worth of data. Take a look and see how long the last volatile period affected your campaigns. If it’s a spike then a drop (or a valley then a climb), make sure the tool is making the right decisions.

A safe way to predict the future is by looking at the past. Find your best day/week and your worst – did Smart Bidding keep up and make the correct decisions? Did it lag behind, and leave spend on the table? Did it react too quickly, or assume the good period would last longer than it did? The answer isn’t simple for any of these questions.

Expect slower creative approval and adjust your automation

When a crisis strikes, everyone affected reacts. In the case of COVID, where the whole world is affected… well, you get the point. That means search engines’ resources will be strained to the brink, and you can safely expect a delay in reaction time. Google and Bing both quote 48-hour ad approval windows, but approvals are normally near-instantaneous. In this particular crisis, where support teams are affected, that time can stretch. Yes, that means your new promo ad may not go live while the promo is live.

Adjust your automated rules for any and all creative swaps coming up. Many advertisers have a rule to pause evergreen ads when launching a new set of promotional ads. In a time of crisis, wait until you have absolute confirmation that the new ads are live before pausing the others.

Brand safety and messaging need to be front of mind

It’s challenging to understand and react to brand safety in tumultuous times. Keep your brand principles and voice front of mind and you’ll know exactly how to act. If you’ve planted your flag of “we go where customers go,” then you’ll want to keep doing that. If you’ve planned on excluding any and all controversial content, the answer is simple as well.

Where it gets challenging is for those who tread the middle ground: “We’re fine being on news sites, but only if it’s not controversial.” This is much more difficult in modern times of crisis, as quite literally every news outlet will, in some way or another, refer to the crisis.

The other challenge to watch is what you say in your ads. Shipping times shift. The ability to make in-person appointments? Not so much. 24/7 call center? No way. With the advent of responsive search ads, odds are you have some value proposition lying around in there that may be invalid in a crisis.

Worse yet, this can compound with panic buying. If a customer sees same-day delivery, they will buy from you. If automated tools see your eCTR and conversion rate spike, they’ll prioritize the ad unit. If you can’t fulfill the promise you set, that customer isn’t coming back. It’s a dangerous cycle. The best way to get ahead is to ensure your ads communicate reality to your customers, even if it’s cloudy news.

Take a look at every part of your ads (yes, even those lil ol’ structured snippets) to see what’s still valid.

Watch queries closely, especially Dynamic Search Ads

This is similar to the inventory issue noted above. If there’s something on site that people are searching for, DSAs and close variants will find it. There’s nothing like a pandemic to be a casual reminder that yes, your promotional swag company DOES sell custom hand sanitizer.

Many companies are placing alerts on their site to address how they’re reacting to a crisis. This can throw a wrench into search queries generated by DSAs or broader match types.

Pay close attention to queries as they arise. If you’ve actively decided you don’t want to show around any crisis-related queries, add those negatives right away! Exclude any crisis-related pages from your DSA campaigns as a failsafe, and your campaigns should still run smoothly.
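A rough sketch of that negative-mining step might look like the following. The term list here is an illustrative assumption — build yours from your own search-terms data rather than copying this one:

```python
# Sketch: scan a search-terms report for crisis-related queries and emit
# exact-match negative keywords. The CRISIS_TERMS set is illustrative.

CRISIS_TERMS = {"covid", "coronavirus", "pandemic", "quarantine"}

def crisis_negatives(search_terms):
    """Return queries containing crisis terms, formatted as exact negatives."""
    negatives = []
    for query in search_terms:
        words = set(query.lower().split())
        if words & CRISIS_TERMS:      # any overlap with the crisis term list
            negatives.append(f"[{query.lower()}]")
    return negatives
```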

Last but DEFINITELY not least – craft volatility alerts

I’ve not been shy about recommending “pops and drops” alerts. They’re crucial at all times for all accounts to ensure we address performance swings with haste without staring at the computer all day and night. In an era of crisis, things move FAST. Waiting until tomorrow isn’t an option anymore.

Instead, think of the metrics you care about where you need an alert. Make the swings something you truly care about (e.g. spend quadruples hour over hour) and consider running your scripts on a more frequent basis.
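As a sketch, the "spend quadruples hour over hour" rule might look like this. The 4x and 0.25x thresholds mirror the example above, not a recommendation; tune them to swings you truly care about:

```python
# Sketch of a "pops and drops" check: flag when spend jumps or collapses
# hour over hour. Run it as frequently as your reporting data refreshes.

def volatility_alert(prev_hour_spend, this_hour_spend,
                     pop_ratio=4.0, drop_ratio=0.25):
    """Return 'pop', 'drop', or None for an hour-over-hour spend swing."""
    if prev_hour_spend <= 0:
        return "pop" if this_hour_spend > 0 else None
    ratio = this_hour_spend / prev_hour_spend
    if ratio >= pop_ratio:
        return "pop"
    if ratio <= drop_ratio:
        return "drop"
    return None
```

In practice the return value would feed an email or chat notification; the comparison logic is the part worth getting right.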

We’re in the midst of a crisis today. To us humans, these are unprecedented times. To machines, these are periods of extreme volatility. They’re unprecedented times, just like the last unprecedented time. Review what happened during your last rollercoaster ride. Evaluate how your automation performed and adjust for the current period and beyond. 

Remember, the time of crisis is a time to cement your brand voice, a time to speak 1:1 to your customers. It’s a challenging time to be a marketer, but that doesn’t mean you should stop being one.

SMX Overtime: When to use PPC automation (and when not to)
Wed, 15 Jan 2020 13:53:37 +0000
PPC expert Aaron Levy shares his insights about how to structure accounts to use automation effectively during a roundtable discussion at SMX East.

The post SMX Overtime: When to use PPC automation (and when not to) appeared first on Search Engine Land.


Account structures, cross channel attribution, smart bidding and automation reporting were among the many questions during SMX East’s roundtable, “How automation really works and when to use it.” I wanted to take the time to answer a few additional questions from attendees after the event.

I understand it depends on the business, but how do you go about structuring your accounts? How do you set up campaigns?

I wrote a post about three years ago about decision trees for SEM segmentation that, for the most part, still outlines how we structure our accounts.

For each major element (audience or keyword for search, different audience types for YouTube), ask yourself two simple questions. Will performance be materially different? Does messaging need to be materially different? If the answer to either is yes, you can assume that segmenting those targeting elements is a good idea.

What’s changed over the years is the priority. When the post above was written, there was no punishment for over-segmentation and no upside for campaign consolidation. In modern search, I recommend keeping as much data together as possible. Yes, you can still optimize a group of campaigns together using a portfolio, but it’s best to keep similar performing ad groups/keywords together unless there’s a notable case to be made for segmenting.

Addressing a few common questions: Yes, we do still segment campaigns by match type, but we have tested moving them together – the results were neutral. No, I don’t recommend SKAGs – there’s no point anymore. Yes, segmentation by device is okay, but I wouldn’t call it a “default.” Generally speaking, the controls in place to bid or message by device are sufficient to control budget.

How can you use smart bidding when your CPA/ROAS goals change frequently (biweekly/monthly)?

First of all, I wouldn’t recommend see-sawing goals unless there’s an explicit reason. If there are dramatic inventory swings, or a lead at the end of the month is worth more than one at the beginning, I’ll allow it. Otherwise, stay stable.

With that out of the way, it depends on how dramatic the goal shifts are and/or if they return to where they were. If the swings aren’t particularly dramatic (e.g. a CPA moves from $100 to $90) you should be totally fine nudging your bid targets without resetting the learning period. Usually, a shift of 5 to 10% won’t disrupt the system too much.

If you’re attacking big sale days, for example, seasonality adjustments will effectively “coach” the bidder and tell it to expect higher or lower swings.

How much time do you save with automation on a weekly basis? Is Supermetrics the go-to for automating reporting?

Tough to say as to how much time it saves since we inherently wind up “reinvesting” that time in our clients! You can look at it on a task-by-task basis to give you an idea of time savings:

  • My teams probably spent five to eight hours per month doing search queries. Since we’ve started (mostly) automating via n-gram scripts, that’s down to one or two hours.
  • Manual bidding? That’s AT LEAST an hour or two a week, now bid optimization or analysis is down to an hour a month. 
  • The idea of pulling manual reports sends shivers down my spine. Even when we had a platform or “human automation,” the reports would still take about two hours per week per client. Now we’ve managed to automate QA (yep – it’s possible) which takes that time down to a half an hour or so.
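For the curious, the n-gram idea behind the query automation mentioned above is simple enough to sketch: aggregate performance by word sequence instead of reading queries one by one. The tuple layout here is an assumption for illustration; real scripts would pull these fields from a search-terms report:

```python
# Sketch of an n-gram report: roll up clicks and conversions by n-word
# phrase across all search queries, so patterns surface without manual review.

from collections import defaultdict

def ngram_report(query_rows, n=2):
    """Aggregate [clicks, conversions] by n-gram.

    query_rows: iterable of (query, clicks, conversions) tuples.
    """
    stats = defaultdict(lambda: [0, 0])
    for query, clicks, conversions in query_rows:
        words = query.lower().split()
        for i in range(len(words) - n + 1):
            gram = " ".join(words[i:i + n])
            stats[gram][0] += clicks
            stats[gram][1] += conversions
    return dict(stats)
```

Sorting the result by clicks with zero conversions is a quick way to surface negative-keyword candidates.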

With regard to reporting, it largely depends on your needs. We license Tableau for clients that warrant it which is a huge help to automate things like pacing etc. But, as the asker indicates, much, if not all, of that can be done via Supermetrics. We often use Supermetrics to parse additional data into Data Studio for cohesive reports. Just make sure to have an alerts system for when a query fails or times out.

How do you use cross channel attribution data for your bidding and optimization?

There are a few options here. Bear in mind that attribution data usually has a somewhat significant delay, or at the very least isn’t real-time.

  • Use your cross channel data to build in a “discount” factor for bidding. Say Google reports 100 conversions a month, but your attribution tool indicates only 60 of them were incremental. Aim your goals to somewhere in the middle, perhaps lower your CPA target (or increase ROAS target) by ~20%.
  • Feed data back into UI’s as a separate column – I wouldn’t recommend using offline data for real-time bidding (too many things can go wrong), but having the data readily available can make decisions easier
  • Use Google Analytics conversions in lieu of the Google Ads pixel – there are tradeoffs here that I won’t go into, but it is a simple way to ensure that the broader marketing picture is considered in Ads
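The first bullet's math can be sketched out as follows. The 100-reported / 60-incremental numbers are the example's; the halfway blend is one reasonable reading of "aim your goals to somewhere in the middle," not a prescribed formula:

```python
# Sketch of the attribution "discount factor": shade the in-platform CPA
# target by the incremental share your attribution tool reports.

def discounted_cpa_target(current_cpa_target, reported_conversions,
                          incremental_conversions):
    """Lower the CPA target in proportion to non-incremental conversions,
    splitting the difference between platform and attribution tool."""
    incremental_share = incremental_conversions / reported_conversions
    # Halfway between trusting the platform fully (1.0) and the tool's share.
    blended = (1.0 + incremental_share) / 2
    return current_cpa_target * blended
```

With 100 reported and 60 incremental, a $100 CPA target becomes $80 — the roughly 20% cut described above.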

Can you go around the data limitation for smart bidding by adding micro-conversions along the purchasing funnel with fixed values?

Absolutely! Going higher in the funnel has worked well for a number of our clients. Similar to the attribution challenge noted above, use these micro-conversions as directional data. Check your assumptions often to ensure that you’re not putting too much weight in a certain area. Additionally, make sure whatever actions you choose are unique. Don’t add weight to both time on site and page views (they’re one and the same) and don’t double up on different information capture points.
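A minimal sketch of fixed-value micro-conversions, with event names and dollar values invented purely for illustration — the point is the shape (unique events, fixed weights), not the numbers:

```python
# Sketch: assign fixed values to funnel micro-conversions so the bidder has
# more (directional) data to learn from. Names and values are illustrative.

MICRO_CONVERSION_VALUES = {
    "pricing_page_view": 1.0,
    "demo_video_watched": 3.0,
    "lead_form_submit": 25.0,   # the primary conversion carries most weight
}

def session_value(events):
    """Sum fixed values over the UNIQUE events in a session, so repeated
    fires of the same event don't double-count the same behavior."""
    return sum(MICRO_CONVERSION_VALUES.get(e, 0.0) for e in set(events))
```

Deduplicating with `set()` is the code-level version of the warning above: don't add weight to two signals that measure the same thing.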

Will we ever be automated out of our jobs?

No. Though I suppose it depends on your job.

Look at it this way: humans are tremendous thinkers and strategists, but horrible automatons and calculators. The parts of our jobs that will be automated away are the highly manual ones. It’s my (and many others’) outspoken opinion that there’s no reason to bid manually under the guise of control. We’re romanticizing the past, looking back at times when we were hand-picking what we thought were the most important signals (keywords, bids, and copy) while tuning out human signals like demographics, behavior and technological capacity.

The future is showing a job shift rather than a job replacement. Our job won’t be to pull every knob and push every button – it will be making sure the machines are working right. A key trait in paid search success comes from looking under the hood, understanding how the automation works and how to use it to its full potential. 

Keywords are going to go away someday. I don’t know when that day is, but it will happen. Those who take the time to understand system quirks and features will win. Those who try to shoehorn the old way of thinking into the future will be automated out of a job.

2020 will be the year to get our data right
Fri, 03 Jan 2020 15:38:43 +0000
A reliable data flow will help automation tools we started to use in past years do their jobs better.

The post 2020 will be the year to get our data right appeared first on Search Engine Land.


Contributor and SMX speaker, Aaron Levy, shares his insights about why it’s the “year of getting our data really, really right” and how that will impact paid search automations in 2020.

Below is the video transcript:

Hi, my name’s Aaron Levy. I’m the group director of paid search at Tinuiti. I think 2020 is going to be an interesting year of a mix of some of the things that we talked about for the past couple of years.

It’s already been the year of mobile.

It’s already been the year of automation.

It’s already been the year of people.

I think this year is going to be the year of data. All those things that we’ve been building up for the past couple of years – having Google automate all of our bids and our keywords and things like that – that only works with really good data.

So I think that this upcoming year, in 2020, it’s going to be a year of getting our data really, really right. Getting it to flow back and forth, getting it to pass back and forth, so that then these automation tools that we started to use in the past years can do their jobs a little bit better.

I’m pretty excited about it, honestly, because I’m really sick of doing manual bids by myself. Or having my team hit the same button a million times.

If we can get good data into the search engines to have their smart tools do what they’re really good at, that’s how we’re going to do really well in growing into the future.

I think that 2020 is going to be all about data, all about data flow, all about data passed back and all about getting it right.

This is part of a special feature from our community of experts on what successful marketers will do in 2020.

What the data tells us about the death of exact match and its impact
Thu, 06 Apr 2017 14:54:04 +0000
Following Google's recent changes to exact match in AdWords, many search marketers were concerned about the impact on their accounts. Now that it's been a couple weeks, columnist Aaron Levy digs through his client data to see whether advertisers' fears have come to pass.

The post What the data tells us about the death of exact match and its impact appeared first on Search Engine Land.


On March 17, Google announced an adjustment to the functionality of our beloved match types, effectively changing exact match into semantic match. For example, [new york flights] could potentially show for flights to new york, since the presumed intent is the same.

The change affects two major parts of what we advertisers have historically loved about exact match: preserving word order and keeping out filler, or “function,” words.

I won’t go in-depth as to how the change works, as it’s been outlined by a dozen folks already. Rather, I’ll dive into the data to assess exactly what’s going on. I looked into a trio of accounts in varying industries that should’ve been affected by the change and outlined the findings below.

For some context, the three accounts we looked at were:

  • a home services company. A huge account, broadly built out including geospecific terms and every possible name for each service.
  • a national media company. Another massive one, including names of artists for the media that’s being sold and every filler term (e.g., All Out of Love by Air Supply).
  • a luxury travel company. A smaller account that has a rock-solid foundation, but is plagued by the dreaded Low Search Volume tag when exact match builds are extended.

Here are the four fears I had surrounding the change — let’s see if the data dispels them:

Fear #1: Exact match is dead — long live semantic match

Reality: False. The proportion of close variant impressions is decreasing for extensive accounts.

I reviewed a variety of accounts across industries and sizes, ranging from huge to tiny, from local-focused to national. The proportion of volume for exact match queries vs. close variants actually improved post-change. Woo hoo!

The charts below highlight impression volume only, but the metrics held true for click, spend and conversion volume as well.

Now, before I start the parade, it’s important to remember this change wasn’t necessarily meant for agencies like Elite, which are notorious for insanely complex builds. We already have most of the expected close variants covered based on the style of our accounts, so we may be sheltered from the change. We found that smaller accounts like the travel account noted above were affected more than the more robust ones.

Fear #2: Performance will differ for semantic match

Reality: Yup. Not much worse than close variants used to be, but this is worth keeping an eye on.

Performance was always different for close variants vs. pure exact match. Occasionally for the better, but usually… not. Much like the section above, we broke out the same three accounts before and after to see how dramatic the difference was between new and old, and if it’s worth being concerned about.

The two larger accounts saw little to no effect — slightly poorer performance post-change for the large media account and slightly better performance for the large home services account. Our smaller luxury travel account is seeing a pretty big swing here to the tune of… well, really bad. I don’t believe the data is even close to statistical significance quite yet, but you should be starting to see a theme here.

Fear #3: Behavior is more than semantics

Reality: To be continued…

This is the area that I’ve always struggled with the most for close variants. First, there are the obvious areas where plurals or shifting word order mean something drastically different. Let’s say you’re a good PPC manager, and you structured robust brand campaigns including your site URL. There is a pretty big difference between [www soap com] and [www soaps com], but that’s not something Google will pick up. Same goes for [office trailer], where the person is likely searching for a video, and [office trailers], where they’re searching for a temp building.

What we’re starting to see with this new shift is an increase in the types of misspellings that are coming up.

For our little travel company, we target a few fancy-pants hotels around the world — and some interesting behavior is coming up. Targeting [Les Bains Hotel] for example, never used to have any close variants. We’re starting to see it show for what we’ll call accent queries like Le Bain Hotel & Le Ban Hotel – basically typing how they’d be pronounced. Seems like a small enough change, but we’re willing to bet the person spelling like that is far less likely to book a trip to the Louvre. Yes they’re semantically the same, but they could indicate a drastically different customer. A person could be less educated, have lower (or higher) income, be a casual observer or window shopper — the list goes on, but you get the point.

We haven’t seen a dramatic shift in behavior yet, but I’m starting to suspect these are the types of queries that are causing our travel friend to struggle. Which brings us to the final fear – and I’m pleased to see it’s been debunked.

Fear #4: Hyper-segmented accounts will be ignored

Reality: False (at least so far).

With this change, I expected to see a lift in average word count for close variant queries, since the old iteration wouldn’t include filler words like “to,” “from,” “in” and so on. Based on the prior two observations, I think you know what’s coming.

I observed more or less perfect stability in the average query length across larger accounts. Since they were already so robust, there was next to no change in matching behavior. If anything, there was an improvement. Since the smaller account hadn’t been fleshed out as dramatically, the updated exact algorithm had to go fishing a little more and was finding new queries that the existing exact terms weren’t already hitting.

So, is this bad?

It depends.

From my observations, the update is doing what it’s supposed to do. The reach of exact match terms has been extended, granting new volume you weren’t reaching before. That can be good or bad, depending on the industry, linguistic meanings, double entendres and so on. You may find there are a lot more positive terms out there you weren’t covering with your existing build.

For smaller accounts, you may have to do a bit of work on structure and negatives, as the change is clearly having an effect. For larger accounts, years of solid maintenance and best practices mean we’re in great shape.

Bigger picture, this represents a shift away from keywords and toward user-driven marketing, in line with what we’ve seen over the past year. I’ve always been a staunch advocate for rigorous segmentation by intent + performance – the future will be no different. However, from my observations, Google’s semantic and intent-driven matching still has a ways to go before I’ll turn the keys over 100 percent.


Should you create device-specific PPC campaigns?
Fri, 16 Dec 2016 16:14:31 +0000
Columnist Aaron Levy, a big advocate of PPC segmentation, discusses the pros and cons of device-specific AdWords campaigns and shares tips on how to get started.

The post Should you create device-specific PPC campaigns? appeared first on Search Engine Land.

Should you split your device campaigns apart?

On February 6, 2013, Google announced Enhanced Campaigns, coupling devices together to encourage (nod nod wink wink) the development of mobile-first campaigns. Bing, of course, followed suit soon after.

The PPC world at large was not pleased. I think I speak for many of my peers when I say we’re a group that relishes precise control. While device-coupled campaigns offered mobile-specific messaging, they removed the ability to target mobile/tablet-only keywords or to budget by device.

In late May of this year, Google was kind enough to reverse this most unpleasant change, allowing device bid modifiers as low as -100 percent for each device type. In a manner of speaking, device-specific campaigns are back. Yet there seems to be a bit of chagrin when it comes to relaunching our device-only campaigns, be it due to rising complexity, decreasing tablet volume or the feeling that, on the whole, bid modifiers will cover it.

I’ve written previously about my affinity for segmentation, so it should come as no surprise that I (generally speaking) advocate device-specific campaigns. That said, it’s not a simple yes-no decision any more, as there are far more factors at play to ensure account stability.

I’ll walk through some of the pros and cons of device-specific campaigns, then provide some tips for designing them.

Pros of device segmentation

Isolate growing devices. Non-desktop volume is as high as it’s ever been and shows no sign of slowing. We took a representative sample of our largest B2C clients at Elite SEM (my employer) and found mobile made up 60 percent of clicks over Cyber Weekend in 2016, up from 54 percent in 2015.

Adjust bids based on device cadence. Mobile, tablet and desktop show far different performance cadences. I took a sample of a dozen or so enterprise clients across B2B and B2C and found that (shockingly) mobile PPC traffic has a tendency to spike on weekends and in evening/commuter hours, tablets remain more level, and desktop peaks midday. While you can adjust by device by time of day using a device/time-of-day script, it’s far simpler in separate campaigns.


Mobile PPC traffic on computers, tablets and smartphones by day of week (left) and time of day (right) for a sample of Elite SEM agency clients.

Embrace demographic differences. Household income (HHI), age + gender volume and conversion distribution are all skewed by device. Based on the same dataset above, mobile customers tend to have a lower average order value (AOV) or be more impulsive, while converting at a lower rate. Tablet buyers tend to be older females, while desktops skew slightly male. While these are all bid modifiers, there’s no simple way to modify HHI and gender by device.

Device-specific messaging. In line with performance cadence changes listed above, it’s abundantly clear that searchers have different appetites, needs and patience depending on their device(s). Notwithstanding, the growth of voice search is changing searcher behavior on mobile devices, which requires a separate keyword set as well — a difficult feat to accomplish in combined campaigns.

Performance-based budgeting. Perhaps the biggest asset of device-specific campaigns is budgeting! Creating device-specific campaigns will allow you to distribute budget as you see fit.

Cons of device segmentation

Increased complexity. No shocker here: if you create device-specific campaigns, you’ll have three times as much to monitor. In my experience, it doesn’t necessarily triple the workload, but the fact of the matter is that there are more places to adjust, which in turn yields a larger time investment.

Data shrinkage. If an average campaign gets 100 clicks a day and you craft device-specific campaigns, you’ll likely wind up with 60 mobile clicks, 30 desktop clicks and 10 tablet clicks. When you segment this granularly, you may lose the ability to make campaign-driven or even keyword-driven bid decisions because the dataset is simply too small.

No desktop exclusions on Bing (yet). As of the time of this post, there is no way to create 100 percent device-specific campaigns on Bing, as there is no -100% Desktop bid modifier. My pals at Bing say this is a common request and is likely to change soon; but for now, you’d have to manage both engines completely separately.

They’re not technically supported. You heard right. While Google acknowledges that device-specific campaigns are possible, they’re not supported. You generally need to hack a few things together to ensure full functionality with bidding platforms, namely that back-end settings need to be changed to make the bid tools work properly.

We’ve already seen a few weird quirks pop up — call extensions don’t always create the forwarding numbers if there’s no mobile volume. App extensions don’t always show for tablet campaigns. Bid tools don’t quite behave how we’d expect. Nothing truly problematic, but worth noting if you’re going that route.

We started to hit account limits. I never thought I’d see the day when a million targeting options wouldn’t be enough, but we’re getting there. If you’ve read my previous posts on segmentation, it makes sense, and really, an account with a few hundred campaigns could easily hit limits across channels.

Creating device-specific campaigns

Now that I’ve potentially convinced you at least to create a few device-specific campaigns, there are a few things that I’d recommend you keep in mind as you break things out.

Start with a base device per campaign, focusing on the top volume driver. If a given campaign is 70 percent mobile, clone it twice and devote the other two to desktop and tablet.

Do a bit of napkin math to determine if you think you’re at risk of hitting account limits. On a base level, you should expect three gender modifiers and six age modifiers per ad group, plus likely an RLSA list or two. This translates to every ad group counting at least 13 items toward the five-million-per-account limit on ad group targeting items.

At a campaign level, you’ll have at least six geotargeting settings (the five HHI levels and national), plus likely five to seven bid flights and campaign-level negative keywords. This would make every campaign count for about 12 toward the one million setting limit. My math is approximate, of course, but it gives you a starting point.
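If you want to run that napkin math at scale, a few lines of Python will do it. This is only a sketch of the estimates above: the 13-items-per-ad-group and 12-settings-per-campaign counts and the five million / one million limits are the approximations from the text, and `targeting_headroom` is a hypothetical helper, not part of any Google tooling.

```python
# Rough napkin math for account limits, using the per-ad-group and
# per-campaign estimates from the text. Actual limits and per-entity
# counts in your account may differ.

AD_GROUP_ITEM_LIMIT = 5_000_000     # ad group targeting items per account
CAMPAIGN_SETTING_LIMIT = 1_000_000  # campaign-level settings per account

def targeting_headroom(num_ad_groups, num_campaigns,
                       items_per_ad_group=13, settings_per_campaign=12):
    """Return remaining room under each limit (negative means over)."""
    return {
        "ad_group_items_left":
            AD_GROUP_ITEM_LIMIT - num_ad_groups * items_per_ad_group,
        "campaign_settings_left":
            CAMPAIGN_SETTING_LIMIT - num_campaigns * settings_per_campaign,
    }

# Example: 300 campaigns of 50 ad groups each, tripled for device segmentation.
room = targeting_headroom(num_ad_groups=300 * 50 * 3, num_campaigns=300 * 3)
```

Even a heavily segmented account like the example leaves plenty of headroom, which matches the article’s point: limits are reachable, but only for the very largest multi-channel builds.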

Set new bids according to your previous mobile modifiers and/or conversion rate. If you don’t necessarily trust your old modifiers, bid based on conversion rate with a small buffer (15 percent) to account for cross-device lifts.
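To make the conversion-rate approach concrete, here is a minimal sketch of deriving a starting CPC bid with the ~15 percent cross-device buffer suggested above. The `initial_cpc_bid` helper and the example target CPA and conversion rate are illustrative, not from any platform API.

```python
# A minimal sketch: starting max CPC derived from target CPA and
# conversion rate, padded by the ~15% cross-device buffer suggested
# above. All numbers here are illustrative.

def initial_cpc_bid(target_cpa, conversion_rate, cross_device_buffer=0.15):
    """Max CPC = target CPA x conversion rate, plus a cross-device buffer."""
    return round(target_cpa * conversion_rate * (1 + cross_device_buffer), 2)

# e.g. a $50 target CPA and a 4% mobile conversion rate -> $2.30 starting bid
bid = initial_cpc_bid(target_cpa=50.0, conversion_rate=0.04)
```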

Wipe (or reset) your bid modifiers for device-specific campaigns. Remember, the whole point of this was to bid smarter by device.

Evaluate existing search queries, and determine if you’re missing out on any device-specific terms (like those that may be specific to voice search).

Don’t forget to evaluate your tracking settings and marketing messages at the outset before collecting data. You’ll want to wipe all tracking URLs and start anew with your shiny new device-specific campaigns.

As with any other change, migrate SLOWLY. It’s better to start collecting data a bit at a time than to rip the Band-Aid off. Make the migration during a quieter season, lest you disturb the balance during your busiest time.

So are device-specific campaigns a must?

In short, no. But they should be strongly considered. While I’m clearly a staunch advocate of segmentation, the fact of the matter is that it’s not an automatic choice. Device segmentation isn’t right for every single campaign, keyword or even website.

That said, I strongly urge everyone to evaluate the pros and cons noted above and figure out whether device-specific campaigns can benefit their accounts.

The post Should you create device-specific PPC campaigns? appeared first on Search Engine Land.

]]>
Perfect Your SEM Testing: How & Why To Evaluate Everything: SMX East 2016 /perfect-sem-testing-evaluate-everything-smx-east-2016-261180 Mon, 17 Oct 2016 17:45:31 +0000 http:/?p=261180 Looking for inspiration when it comes to designing and running tests in your PPC accounts? Columnist Aaron Levy recaps a session on this topic from SMX East 2016.

The post Perfect Your SEM Testing: How & Why To Evaluate Everything: SMX East 2016 appeared first on Search Engine Land.

]]>

A wiser man (or woman) than I coined the quintessential acronym of paid search. Rather than the ABCs of sales (Always Be Closing), savvy paid search managers live and die by ABT — Always Be Testing.

In a session presented recently at SMX East, three of the brightest in the industry shared their tiers of testing strategies, successes and failures. “Perfect Your SEM Testing: How & Why To Evaluate Everything” featured insights from Ted Ives, owner of Ted Ives Consulting; Jake Pinocchio, a digital marketing associate at Red Ventures; and Susan Waldes, president of Fivemill.

Perfect Your SEM Testing — Ted Ives

Ted Ives kicked the panel off with a focus on data collection. He asserted that the first thing an advertiser or agency should do is to step back and assess the data you need to measure. He stressed that his ultimate goal when he tests is to maximize profit for his clients.

Ives’ testing philosophy is contrarian to my Always Be Testing mantra expressed earlier. It’s important to remember that if you test 100 percent of the time, at least 50 percent of your traffic will be going to an underperforming variant. Instead, Ives recommends evaluating potential negative impact and testers’ workload before structuring a test.

The most prominent test that Ives explored centered around landing pages. He assessed the value of a product page relative to specific landing pages. He found that the strongest surface-level performer (the gated form) led to less engaged customers and less revenue. He suspected content in a walled garden was a friction point, forcing customers to convert before they were ready.

 


He believed that many potential customers were actually coming in via a phone call. Echoing the theme of measurement first, he implemented a call-tracking suite to measure impact.

To conclude, Ives reminded us that measuring success often has to go deeper than surface metrics and relies on significant data. While there are a myriad of statistical significance testing tools, they aren’t always correct. His favorite tool for evaluating significance is the tool built by Cardinal Path.

How To Be A Full Funnel Marketer — Jake Pinocchio

Jake Pinocchio was next up. He does testing at Red Ventures, a truly full-service firm that handles leads through every aspect of the sales funnel. So Pinocchio has the opportunity to test every aspect of the marketing funnel to maximize the value of every lead. He walked the audience through four tests, each exploring different parts of a conversion path.

His first example was testing different product tiers for a credit card company. There were two different products available. Product A was simple to get approved for and yielded higher numbers of total users, but at a lower lifetime value. Product B was a premier product that had lower volume but a better long-term customer. Pinocchio found that while the premier product had lower volume, the value per conversion was much higher.

Pinocchio’s second test sought to compare whether a lead form or a call center would better convert potential cable customers. He hypothesized that driving customers to an IVR system would be more efficient than forcing them through forms. Using click-to-call instead of a traditional campaign, he was able to manipulate the conversion path and shift the proportion of conversions toward the IVR system. While the total number of conversions remained flat, the proportion of calls increased by over 50 percent. The end result was a 20X increase in net profit.

His third test supported a theory that I’ve often wanted to explore. Pinocchio theorized an “Extension Tax,” whereby you wind up paying more for having larger extensions. He theorized that a call extension on desktop ads led to fewer ad clicks, decreasing CTR and punishing quality score.


He removed call extensions from his campaigns, expecting to make up the volume in form conversions and decreased CPCs. The results supported his hypothesis — eliminating call extensions led to a 16-percent decrease in CPCs and a lift in form conversions. While this particular test yielded a lift in profit, Pinocchio cautioned the audience to dig deep. Make sure the front-end metrics support the back end.

His final test sought to explore an age-old question: “Does enhanced cost-per-click (ECPC) work?” Pinocchio and his team found that ECPC worked as promised, yielding an increase in conversion volume at a similar budget. Net profit increased in this particular case, but Pinocchio cautioned that ECPC could backfire: Google doesn’t see total profit metrics, so a decrease in back-end revenue is possible.

AdWords Drafts & Experiments (Answering the Age Old SEM Mysteries) — Susan Waldes

Closing the panel was Susan Waldes of Fivemill. Before she got rolling, Susan stressed the importance of understanding what really counts as a test vs. what’s just doing your job.

The most common and valuable form of testing, according to Susan, is good old-fashioned A/B testing. To ease A/B testing, Google’s given us a new tool called Campaign Drafts and Experiments (CDE), which replaces the now-defunct AdWords Campaign Experiments.

The major benefit of using CDE for landing page and ad copy tests is that you can control traffic split. Echoing Ted Ives’ point from earlier, you don’t have to use half of your budget for a test. Waldes recounted an example of when she wanted to explore how crazy traffic swings would be if she doubled her bids. You’d never execute such a test with all or even half of your traffic, but by using CDE, she was able to test on a small scale and adjust.

In addition to campaign testing, Waldes explored the Drafts feature of CDE. Drafts fills a need, allowing you to forecast scenarios and create new, filterable datasets. Drafts also allow convenient sharing of information outside of AdWords editor, a vast improvement over old AES files.

By using CDE, Waldes was able to answer a few of Paid Search’s age-old questions. If you’re running keywords in a few campaigns, Waldes says you can use negatives to direct traffic without a noticeable loss in impressions. In testing CPC against AdWords CPA bidding, Waldes found that manual bidding was more efficient and yielded better leads.


The resounding theme of the session was that as marketers, we need to dig a bit deeper beyond surface metrics. It’s our responsibility to identify the tangible impact of tests, rather than relying on surface metrics.

The post Perfect Your SEM Testing: How & Why To Evaluate Everything: SMX East 2016 appeared first on Search Engine Land.

]]>
What’s the best attribution model For PPC? /whats-best-attribution-model-ppc-252374 Mon, 27 Jun 2016 16:58:37 +0000 http:/?p=252374 Columnist Aaron Levy explores some common attribution models used by digital marketers. Which one is right for your business or client?

The post What’s the best attribution model For PPC? appeared first on Search Engine Land.

]]>

Last-click attribution has been the default for digital marketers since the beginning of time. It’s easy, it’s tangible, it’s close to the finish line, and in general, it makes us PPC managers look great. And that’s the problem.

Let’s face it — the last-click metrics we’ve been operating on for years have been greatly over-assigning credit to the last interaction. Think about it from a sporting perspective. We all love watching Usain Bolt close out a relay race by sprinting 30 miles per hour, but could he win the race if his team drops the baton? I personally love watching Jordan Spieth drop putt after putt in every golf tournament that he plays in (maybe I’m the only one…), but does that matter if he hit his first two drives out of bounds? (In my golf world, that works just fine — but that’s a story for another day.)

You get the point: Last-click is wrong. Unfortunately, first-click attribution is wrong as well. So is position-based, time-decay… even data-driven attribution has some major flaws.

So, which one is right? That depends on your goals and what story you want the data to tell. The fact of the matter is that attribution modeling is completely, 100 percent made up. It’s up to you to assign credit to the metrics and channels that are most important to your business or client.

If every attribution model is wrong and right at the same time, which is the best for you to use? All of them? None of them? It’s confusing.

Let’s break the common attribution models down based on their pros and cons to figure out which works best for each scenario.

AdWords Attribution Models

Common attribution models. TOFU = top of funnel.

Generally speaking, your position in the attribution chart is mostly based on your broad-funnel marketing goals.

If your goal is to be as efficient as possible, your best bet is to get close to the bottom of the funnel; last-click attribution is the most extreme of the efforts here. I haven’t been shy about my distaste for last click, so I obviously won’t recommend that. If your goal is PPC efficiency, and outward growth isn’t as important, I recommend a time-decay model.

If your focus is on growth and new customer acquisition, closer to the top of the funnel is better. First click won’t do for the same reason as last — it only looks at one tiny piece of the pie.

My ideal recommendation unfortunately isn’t offered by Google in AdWords or Analytics. I’d love to use a reverse iteration of time decay (known as first preferred in many bid tools), but since it’s not available, we’ll have to settle for position-based (or u-shaped).
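As a rough illustration of how these rule-based models split credit differently, here is a toy sketch. The 40/20/40 U-shape and the seven-day time-decay half-life mirror common defaults, but the `attribute` function itself is hypothetical, and real platforms differ in detail.

```python
# Toy illustration of how common rule-based attribution models split
# credit across an ordered click path. Weights (40/20/40 U-shape) and
# the 7-day time-decay half-life mirror typical defaults; real
# platforms vary.

def attribute(path_len, model="position_based", days_before_conv=None):
    """Return one credit fraction per click, summing to 1.

    For time_decay, days_before_conv must list each click's age in days.
    """
    if path_len == 1:
        return [1.0]
    if model == "last_click":
        return [0.0] * (path_len - 1) + [1.0]
    if model == "first_click":
        return [1.0] + [0.0] * (path_len - 1)
    if model == "linear":
        return [1.0 / path_len] * path_len
    if model == "position_based":  # 40/20/40 U-shape
        if path_len == 2:
            return [0.5, 0.5]
        middle = 0.2 / (path_len - 2)
        return [0.4] + [middle] * (path_len - 2) + [0.4]
    if model == "time_decay":  # credit halves every 7 days before conversion
        raw = [2 ** (-d / 7.0) for d in days_before_conv]
        total = sum(raw)
        return [r / total for r in raw]
    raise ValueError(model)

credits = attribute(4, "position_based")  # [0.4, 0.1, 0.1, 0.4]
```

Run the same four-click path through each model and you can see exactly how much credit shifts away from the closing click — which is the whole argument against defaulting to last-click.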

Should I take advantage of Google’s data-driven attribution modeling?

Maybe. It looks good on paper. Google uses a myriad of user behavior data and general math-y smartness to create a variable model that’s most likely to maximize total conversion volume. In a perfect world, this would mean you’d find a data set that would send your AdWords conversion volume through the roof.

The problem with in-AdWords data-driven attribution is that it’s a single-channel view. While it’ll assess every single aspect of your AdWords data to make a decision, it will only look at AdWords data to make a choice. You’ll have little influence as to whether or not you’re targeting new customers, and you won’t be able to identify conversions that may have come through other non-search efforts like email or social.

That’s not to say I’m totally against it. Single-channel data-driven attribution like Google’s can be quite powerful if you have the right data set to support it. Google recommends a minimum of 20,000 clicks and 800 conversion actions in a given month. Frankly, that sounds low to me.

If your conversion lever is simple and high-funnel (think a button click, key page view and so on), and you have well over 1,500 conversions a month, by all means use this tool to your advantage. If your campaigns are more modest and your goals more detailed, however, stick with position-based to start and adjust based on your goals.
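A quick eligibility check captures both bars discussed above — the minimums Google cites (20,000 clicks and 800 conversions per month) and the more conservative ~1,500-conversion threshold suggested here. The `dda_ready` helper is illustrative, not part of any Google API.

```python
# Eligibility check for data-driven attribution: Google's published
# minimums (20,000 clicks, 800 conversions/month) vs. the article's
# more conservative ~1,500-conversion bar. Helper is illustrative.

def dda_ready(monthly_clicks, monthly_conversions,
              min_clicks=20_000, min_conversions=800,
              conservative_conversions=1_500):
    """Return (meets Google's minimums, meets the conservative bar)."""
    meets_google = (monthly_clicks >= min_clicks
                    and monthly_conversions >= min_conversions)
    meets_conservative = monthly_conversions >= conservative_conversions
    return meets_google, meets_conservative
```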

So what’s the best attribution model for PPC? Just like everything else in our industry, it depends!

The post What’s the best attribution model For PPC? appeared first on Search Engine Land.

]]>
Decision trees for SEM segmentation /decision-trees-sem-segmentation-250280 Thu, 02 Jun 2016 15:01:34 +0000 http:/?p=250280 There is an ongoing debate amongst SEM experts as to how to best segment your PPC accounts, and columnist Aaron Levy illustrates his process with a handy flowchart.

The post Decision trees for SEM segmentation appeared first on Search Engine Land.

]]>

One of the most common and difficult choices you’ll make when managing an SEM account is deciding when to segment keywords. Any search manager worth his or her salt knows that segmentation is key to PPC optimization, but what’s the best rule for how granular you should get?

There are a few different camps on this. There are the long-time advocates of Single Keyword Ad Groups (team SKAG), and even the occasional Single Keyword Campaign. Then, there’s the nearly decades-old “1,000 searches rule,” wherein if you expect a keyword to have more volume than 1,000 searches a month, it needs to be segmented, regardless of messaging. Some people (myself included) religiously believe that match types should be segmented by campaigns, while some simply smash everything together.

Regardless of keyword segmentation strategy, most managers rely heavily on a hybrid of bid portfolios and engine bid modifiers to take care of performance nuances for us. A little ironic, no? We were (and still are) collectively incensed that Enhanced Campaigns hampered our ability to segment by device. “They’re taking away our control,” we screamed and blogged. Yet for some reason, we wholly embrace bid modifiers to take care of everything from audience to geo to behavior.

My issue with bid modifiers is simple: They don’t move budget, and they don’t change messaging. To do that, you need to segment intelligently to ensure you’re making the smartest decision with every dollar invested.

To help with the process, go through a decision tree with each keyword and customer segment. The tree below only has two major segmentation decisions you have to make: Is there a significant performance delta (Does it have a big enough difference to matter?), and should messaging be different?

PPC Decision Tree

A quick caveat before we get started: This decision tree is merely an example I’ve used to illustrate the thought process that goes into SEM segmentation. You should not structure every single account based solely on this tree. But, at the very least, it should get you thinking about how to best optimize your own account structure.

Performance path

Delegating budget based on performance is hands-down the most important decision you should be making when deciding how to segment an account. Our job is to squeeze every single solitary cent of revenue we can out of the money invested. As it stands now, most of us are hyper-focused on testing and tweaking every single possible modifier to ensure we’re bidding as efficiently as possible.

I recommend setting three performance thresholds to start your decision path. It’s up to you to set these thresholds; as a starting point, a >30 percent performance delta can start you down a customer segmentation path, a 10 percent to 30 percent delta can send you down the message segmentation path and a <10 percent delta can be kept “as is.”

If an account is brand-new, head straight for the messaging path. Just leave room to chop up later as data accrues.
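The threshold logic above can be sketched as a tiny routing function. The cutoffs are the suggested starting points from the text (>30 percent delta → customer path, 10 to 30 percent → messaging path, under 10 percent → leave as is), and `segmentation_path` is purely illustrative.

```python
# Sketch of the decision-tree entry point described above: route a
# keyword/segment by its performance delta. Cutoffs are the article's
# suggested starting points, not fixed rules.

def segmentation_path(performance_delta, is_new_account=False):
    """Return which path of the decision tree to follow."""
    if is_new_account:
        return "messaging"      # no data yet; segment by message only
    if performance_delta > 0.30:
        return "customer"
    if performance_delta >= 0.10:
        return "messaging"
    return "keep_as_is"
```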

Customer path

Is the searcher an existing customer or a new customer?

We can discern between existing and new customers fairly easily, thanks to our dear friends, “Customer Match” and “Remarketing.” Set up both a Remarketing list for converters and a recurring Customer Match upload to create customer audiences, and segment them out into defined existing-customer campaigns. Once you start collecting data, you can identify the differences between the two behaviors and segment accordingly.

If a new customer, proceed to the “Familiarity” section of the Customer path. If an existing customer, proceed to the “Activity” section of the Customer path.

Is the potential new customer familiar with your brand?

Brand awareness can play a huge part in paid search performance. It can rear its head in a variety of ways. Sometimes, you might see a regional uptick in brand awareness based on where other advertising is strong or competition is weak. In other cases, it can be dependent on the individual customer’s viewpoints. I’ve seen CTR vary by upwards of 400 percent (with an associated 800 percent lift in ROAS) where brand awareness is high.

If a new customer has been exposed to the brand, segment them into their own campaign. The simplest way to do this is via RLSA (where you can often afford to bid more), but regional segments can isolate performance as well. If a new customer is unfamiliar, proceed to the Messaging path (below).

Has the existing customer lapsed to a point of no return, or are they active?

Every potential buyer likely has some version of brand affinity, showing commitment bias to one brand or another. Depending on your own customer life cycle, there is likely a point when, if a customer is lapsed for too long, their affinity and likelihood of ever returning dwindles down to nothing. This can vary from years (for high-ticket items like a house, a car or a high-end B2B software purchase) to months (for seasonal turnover items like shoes or clothing) to weeks (for impulse purchases like entertainment or small tchotchkes).

If your customer falls into the “Active” segment, they’re more likely to be a repeat purchaser and still share some of that same brand affinity, and you can safely proceed to the Messaging path. You’ll have to treat your dormant customers the same way as you would treat those who are new to your brand. Segment them either at the ad group level or campaign level (your call based on performance), and offer them something new to get them to return.

Messaging path

Does messaging need to vary between the keywords or audiences?

If the answer is yes, segment. If the answer is no, don’t.

Where it gets a little complicated is when you do decide to create a new messaging segment. At the time of this publication, there’s no silky-smooth way to customize copy based on a geographic region for any major search platform; while there are scripts to run within AdWords to customize copy by region, they’re clunky and not particularly effective if you have a large number of different city names. There are tricks in Bing Ads using param functionality, but again, they’re clunky at best.

Instead, the burden falls on the advertiser to make the decision whether to segment a campaign out by region just to vary messaging or not. This is an experiment that can get out of hand quickly; in my experience, there’s usually a click-through rate increase of about 25 percent for regionally specific copy when done well.

The test can get out of hand in a hurry, however; there are 100+ major metropolitan areas in the US alone. Does that mean we should clone our campaigns 100 times over? Of course not. Probably not. Maybe not? Which brings us to our final topic…

Decision thresholds

When you’re looking at 120+ possible audience segments for each term, things can get out of hand quickly. The decision tree isn’t designed to make a campaign manager’s life miserable, but that can happen if it’s applied too strictly.

The reason for hyper-segmentation is to maximize the efficiency of every dollar spent. Maximizing efficiency and making smart bidding decisions inherently requires a certain level of data, even for the most sophisticated of tools.

There’s no golden rule here — no silver-bullet segmentation threshold. Instead, think about the quantity of data that would be needed to make a sound bidding decision based on a given industry. A safe rule of thumb is to use Google’s threshold for active bidding tools — around 15 conversions in 30 days should be significant enough for a segment to be broken out. Any smaller than that, and you’ll only be doing damage.
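As a rough pre-check before breaking anything out, you can project each proposed segment’s conversion volume from its traffic share and keep only those that clear the ~15-conversions-in-30-days bar cited above. The `segments_to_break_out` helper and the example shares are illustrative.

```python
# Pre-check before segmenting: project each proposed segment's 30-day
# conversion volume from its traffic share, and keep only segments that
# clear the ~15-conversion bar cited above. Numbers are illustrative.

def segments_to_break_out(total_conversions_30d, segment_shares, min_conv=15):
    """Return {segment: projected conversions} for viable segments only."""
    return {
        name: round(total_conversions_30d * share)
        for name, share in segment_shares.items()
        if total_conversions_30d * share >= min_conv
    }

viable = segments_to_break_out(
    100, {"mobile": 0.6, "desktop": 0.3, "tablet": 0.1})
# mobile (60) and desktop (30) qualify; tablet (10) stays combined
```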

Too much extra work?

Not a chance! The most common objection to this strategy is that, by moving keywords around or cloning them into separate campaigns/ad groups, you’ll wind up with a ton of duplicated effort and extra work. On the contrary, I ran the data on myself using Toggl (my time-tracking tool of choice) and found that I spent ~20 percent less time managing the day-to-day on an account after hyper-segmentation than before — with stronger performance to boot.

Segmentation is your ally — not your enemy.

The post Decision trees for SEM segmentation appeared first on Search Engine Land.

]]>