Andreas Reiffen – Search Engine Land: News On Search Engines, Search Engine Optimization (SEO) & Search Engine Marketing (SEM)

Going head to head with Google Smart Bidding: The good, the bad and the weird
Thu, 11 Apr 2019 15:45:14 +0000

The first round of testing revealed Smart Bidding’s strengths, but other tests showed where there is room to grow – and some were just downright odd.

The post Going head to head with Google Smart Bidding: The good, the bad and the weird appeared first on Search Engine Land.

Google Smart Bidding, the search giant’s machine learning bidding strategy, has ruffled more than a few feathers in the industry. And for good reason. It commands thousands of real-time bidding signals at its (virtual) fingertips. It can analyze 70 million signals in 100 milliseconds. For retailers casting longing glances at its results so far, things look tempting.

Especially its Target ROAS (tROAS) approach. After all, if you’re merely seeking better return on ad spend, why do you need to put together a complex search solution? Why hire expensive paid search managers, bidding technologies and agency services when you can hand your budgets off to Google and let it work its magic? Well, as with all things that seem too good to be true, the right approach to working with Smart Bidding is much more complex.

With the help of a client, Crealytics (my company) went head to head with Google’s tROAS bidding. We saw a spectrum of results; here we’ll discuss the findings from a drafts and experiments setup. (In the next article, we’ll discuss the results within a geo-split environment.) For context, these were the rules:

  • A control bidding approach, tested against tROAS bidding, within a drafts and experiments setup.
  • We set the test environment up as an auction-split A/B comparison. However, we’ll see later how this approach was inadvertently changed, and not for the better.
  • Our target was to maximize revenue and new customers at a specific ROAS.

Our first round of testing yielded results across the spectrum. Some showcased the strengths of Smart Bidding, some showed where it still has room to grow, and some were just downright odd.

The Good: Auction time bidding – more clicks and lower CPCs

I mentioned that Smart Bidding benefits from a powerful engine. If you look under the bonnet, you’ll see a set of signals that other bidding platforms don’t have. In fact, Google freely claims that no other bidding platform will have access to a significant portion of these signals. Target ROAS draws on moment-of-query information for precision bids. As a result, it knows exactly where on the S-curve it needs to bid, and will never bid a cent higher than necessary to win the auction.

Such findings require some caution, however. We still aren’t sure the degree to which drafts and experiments (in conjunction with tROAS) shift volumes around internally. Pre- vs. post-comparison shows little movement; revenue only increased when spend increased. The pie shifts around…but the overall click volume and resulting revenue don’t grow much larger.

Both test markets saw marked increases in both clicks and ROAS…

…while maintaining low CPCs at the same time

The Good: More balanced bids

Google’s tROAS can also aggregate all segments at the same time. As a result, it derives the optimal bid. Our findings bear this out: tROAS achieved more consistent performance across different segments, like devices and locations.

The Bad: No ability to optimize for advanced metrics

We always encourage clients to work towards more advanced KPIs. As an example, identifying new customers helps pave the way to judging their customer lifetime value (CLV). But in the current environment, Google’s Smart Bidding doesn’t seem to optimize for incremental value. New customers offer a case in point. Because the platform can’t account for this group, acquisition costs shoot upwards, and existing customers are valued the same as new ones. The net result is that using new customer acquisition rate as a KPI remains a challenge under Smart Bidding.

Testing found that Smart Bidding’s algorithm overrode RLSA bid modifiers…and failed to exclude known visitors in the NC campaigns.

We also discovered that tROAS inadvertently focuses on users with high conversion propensity (i.e., strong site interaction and high recency). Audience exclusion – Google’s mechanism for excluding existing customers – failed to work during the test period. Instead, tROAS kept bidding in the campaign that was meant to focus on users delivering more incremental conversions.

The Bad: In a race against dynamic promotions, Smart Bidding will lose (for now)

With any major account change (new product groups, new targets), tROAS regresses to its learning phase. As Fred Vallaeys mentioned previously, certain unique factors still cause machine learning to slip up. Targets often change in response to sales promotions, news coverage and so on. So be warned: advertisers with highly dynamic promotional calendars should always keep an eye on proceedings (bidding manually where needed) to avoid automated bidding getting left behind.

The Weird: Smart Bidding tends to overwrite overlapping conversions

Drafts and experiments can use either a cookie-based or search-based split. Either drafts and experiments didn’t work with tROAS properly, or the auction eligibility changed the moment tROAS activated.

In shared customer journeys spanning both A and B campaigns (i.e., the same user was exposed to auctions won by both tROAS and our bidding approach sequentially), tROAS won the conversion 40 percent of the time. We won the conversion 21 percent of the time.

In other words, tROAS is more likely to overwrite existing cookies when a conversion is going to take place, which makes no sense given it should be a 50/50 split test.
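One way to quantify how far this is from a fair split: condition on the shared journeys that did convert and test whether tROAS’s share of those conversions is consistent with 50/50. A minimal sketch, assuming 100 shared journeys so the article’s percentages become counts (the raw journey counts weren’t published):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum all outcome probabilities
    that are no more likely than the observed one."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    pk = probs[k]
    return sum(q for q in probs if q <= pk + 1e-12)

# Assumed 100 shared journeys: 40 conversions won by tROAS, 21 by us.
troas_wins, control_wins = 40, 21
n = troas_wins + control_wins
p_value = binom_two_sided_p(troas_wins, n)
print(f"tROAS share of attributed conversions: {troas_wins / n:.1%}")  # 65.6%
print(f"Two-sided p-value vs. a fair 50/50 split: {p_value:.4f}")
```

With these stand-in counts the deviation from 50/50 is statistically significant, which is the sense in which the observed split “makes no sense” for an auction-split A/B test.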

The Weird: Decent ROAS, lower profits

Testing in a drafts and experiments setup simply can’t compare to a geo-split test environment. As noted above, campaign interference gets in the way of conclusive results. So, it’s no surprise our tests delivered many ambiguous results.

Top-line results promised a lot, even with a loss of efficiency in new customer acquisition.

On the flip side, our approach appears more profitable than tROAS bidding. Despite relatively small ROAS differences in Market A, we saw a wider ROI gap. Why, exactly? Because the ROI figures track the new-customer percentage as well as overall efficiency.

A lower new customer rate, particularly in the final weeks of our testing, widened this gap:


Ultimately, we determined that the shift in settings, and the resulting change in volume, meant a new test needed to be run. Yet there were still significant learnings to be had:

  1. Google drafts and experiments is a complete black box. There’s no way to unpack the selection criteria, optimization signals or decisioning systems. Retailers testing using drafts and experiments need to be prepared to lose all visibility into what makes their campaigns tick.
  2. If ROAS is your main KPI, you’ll likely see an upswing in performance. However, there is a strong case to be made for moving towards a more advanced metric. (See here for our POV on margin and customer lifetime value.)  Gaining access to bidding signals that are uniquely available to Google indeed yields benefits to the advertiser.
  3. Smart Bidding still has some growing to do to be useful for complex advertisers. We saw latency when reacting to dynamic promotional calendars and felt constricted when bringing in third party and attributed data sets.

In my next article, I’ll explore how Smart Bidding competes in a more transparent and open testing environment. Stay tuned…


Product listing ads, digital co-op and the $13 billion opportunity
Tue, 20 Nov 2018 19:17:53 +0000

By joining forces with brands, retailers making a PLA co-op spend can bid more aggressively and expand their profile where it matters.

The post Product listing ads, digital co-op and the $13 billion opportunity appeared first on Search Engine Land.


In October my company, Crealytics, hosted a conference called [New York] Know Go. This one-day event focused on one of search marketing’s most exciting new chapters: the rise of digital co-op.

The day sparked numerous questions. What does the landscape look like for digital co-op today? How can retailers and their partners avoid its potential pitfalls? And why should we be so excited about a practice still shaking off a pre-digital hangover?

Stuck in the past

Co-op marketing has existed for a long time. Since the late 19th century, retailers have sought higher turnovers via marriages of budgetary convenience. Brands agree to share the cost of an ad with a retailer for more product exposure.

Spend two minutes looking and you’ll find examples of this collaboration everywhere: from billboards to catalogs and magazines to TV commercials. In fact, co-op’s profile feels disproportionately offline. Fully 80 percent of the world’s available co-op funds flow into non-digital channels.

But advertisers have woken up to digital co-op’s potential. For one thing, it promises brands a better understanding of where their budgets go. Unlike the co-op of old, a search marketing equivalent supports a cleaner, more accountable process. There’s also more potential for retailers, for whom PLA acts as primary sales driver.

Let’s look at the numbers.

Co-op’s overall market size stands at $70 billion. But just 20 percent of this is used for digital advertising. Additionally, we estimate that eCommerce pure players spend 80 percent of their budgets on digital channels. Applying this behavior to the wider market unlocks serious long-term opportunity. The gap (from 20 percent to 80 percent) closes. The industry sees a dramatic redistribution of co-op spend, mainly to digital. This leaves $42 billion up for grabs.

Thanks in no small part to its co-op-fueled ad business, Amazon seems likely to claim around half of that amount. Meanwhile, Google and Microsoft Bing recently launched solutions that allow brands to subsidize a retailer’s PLA spend, increasing their marketing pressure and driving additional sales. Given these latest developments, we can assume that search marketing will swallow around 80 percent of the other half – about $16.8 billion. And product listing ads (PLAs) will soak up most of this figure. Even a rough estimate – let’s say 80 percent – puts future PLA co-op spend at more than $13 billion.
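The chain of estimates above can be reproduced in a few lines. All inputs are the article’s own round numbers; the result is an order-of-magnitude sketch, not a forecast:

```python
# Back-of-envelope reconstruction of the $13 billion PLA co-op estimate.
total_coop = 70e9                        # overall co-op market size
digital_now, digital_future = 0.20, 0.80 # current vs. projected digital share

shift = total_coop * (digital_future - digital_now)  # spend moving to digital
non_amazon = shift * 0.5                 # assume Amazon claims roughly half
search = non_amazon * 0.8                # ~80% of the rest flows to search
pla = search * 0.8                       # ~80% of search lands in PLAs

print(f"Shift to digital: ${shift / 1e9:.0f}B")         # $42B
print(f"Search marketing slice: ${search / 1e9:.1f}B")  # $16.8B
print(f"PLA co-op estimate: ${pla / 1e9:.2f}B")         # $13.44B
```

The $13.44 billion output rounds to the “more than $13 billion” quoted in the text.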

Amazon leads the race for colonization

Anyone who’s monitored Google and Amazon’s trajectory reaches a similar verdict. Once upon a time, Google was just a search engine. Amazon, a retailer. But the lines have blurred, as both increasingly reflect each other’s USPs. Things get especially intriguing when it comes to sponsored product models: a key piece of the co-op puzzle.

What are Amazon sponsored products?

Amazon’s sponsored products are keyword-driven, PPC ads that push traffic to a particular product page within Amazon’s own ecosystem. Crucially, these ads never link out. Brands that use them can boost their rankings…and drive additional sales.

The eCommerce giant clocked up over $2 billion in ad revenue…in just Q2 of this year. And its sponsored products accounted for over 80 percent of this figure, mainly thanks to small- and medium-tier Amazon sellers. This is in marked contrast to Google, which makes the bulk of its revenue from big brands and agencies.

Amazon recently extended the reach of its sponsored products offsite, via its Extended Ad Network (in the form of display retargeting). As a response, Google announced a co-funded Shopping Ads beta. And Bing has also joined the game, trialling its own co-op bid platform in the U.S. market.

We’re seeing a battle between Amazon on the one hand, and the rest of the retail market partnering with Google on the other. Amazon leads the game. Jeff Bezos’ empire remains larger than the rest of the retail market combined. So how do the latest moves in co-op advertising compare? Is Amazon strengthening its position? Or is the rest of the market catching up?

An era of co-opportunity

Advertisers prefer PLAs over text ads for many reasons. But with popularity comes competition. It’s now a highly competitive marketplace and, truthfully, many retailers struggle to afford the top positions. This prompts a domino effect for brands who sell through these retailers. Their products don’t show up, get squeezed out of valuable real estate and simply aren’t found.

Collaboration offers an antidote. By joining forces with brands, retailers have bigger paid search budgets to play with. They can bid more aggressively and expand their profile where it matters.

As with many early-stage systems, certain roadblocks have slowed PLA co-op’s path to scale. Still, retailers and brands have several options to leverage it:

Google and Microsoft Bing’s solutions offer brands the possibility to “boost” bids set by the retailer. Those digitally savvy enough can use the platform to manage bids on a granular level, and Google then provides the estimated incremental impact via dedicated reporting. The lure of full control is highly tempting, but the time, effort and risk of not hitting targets can deter many brands.

Because a brand can easily allocate their budget across several retailers, freedom of choice ensures that no one retailer can monopolize a brand’s entire budget. This completely levels the playing field for brands and retailers alike.

Alternatively, following a new and sophisticated methodology presented at [New York] Know Go, retailers can manage co-op PLA campaigns for their brands directly.

This sees the retailer leverage its relationships, acquire the brand’s budgets directly and manage campaigns towards specific targets. Such an approach implies that the retailer reallocates 100 percent of retargeting spend to prospecting. Meanwhile, the brand can retarget select audiences like website visitors, audiences similar to website visitors, or in-market audiences.


A kickback stampede: Why Google’s EU comparison shopping program may carry risk
Wed, 29 Aug 2018 13:40:00 +0000

Google’s ploy to give comparison shopping service providers more market share has triggered a gold rush. Here’s why contributor Andreas Reiffen doesn’t think that’s a good idea.

The post A kickback stampede: Why Google’s EU comparison shopping program may carry risk appeared first on Search Engine Land.

Cast your mind back to June 2017. The birds sang, the sun shone and Google received a record-breaking fine of €2.4 billion ($2.8 billion).

But why, exactly? In a verdict that provoked differing opinions, European Union (EU) regulators ruled that, on its search results pages, Google favored its own shopping results over those of rival comparison shopping services (CSS). Comparison shopping services such as Kelkoo or Shopzilla operate by aggregating online retailers’ product offers.

Originally, the search engine giant limited comparison shopping services to showing text ads only. Following their complaints, Google made it technically possible for them to show product ads. But this changed nothing for the CSS providers, as retailers saw no benefit in an additional intermediary. Further pressure followed, culminating in the eyebrow-raising penalty and an antitrust order that Alphabet’s subsidiary do more to even things up.

The Shopping platform is big business for Google. Top retailers pump huge amounts of money into the format, more than they do on text ads.


Faced with the risk of further penalties, including the shutdown of its Shopping service altogether, Google needed a quick response. Twelve months later, the solution looks bright for EU retailers: Google now offers discounted rates to those that use its new Comparison Shopping program.

Provided they work with a registered CSS partner, accounts can enjoy costs per click (CPCs) up to 20 percent lower than in regular Shopping auctions. Additionally, Google will offer any certified CSS partner a monthly rebate (referred to as “SpendMatch”) of up to $37,400.

A kickback stampede

Google’s ploy to give comparison shopping service providers more market share has triggered a stampede, with all kinds of service providers clamoring to gain CSS status.

From what we’ve seen, experience levels and resources vary; providers only need to create a website with a few product feeds to participate. The search engine does all the heavy lifting (e.g. query/product match).

We’ve noticed dozens of new CSS providers popping up recently. This underlines one of Google’s major arguments against the European Commission — comparison shopping providers have very thin value in this game. For this reason – and after seeing what’s now happening in the market — I don’t believe the ruling makes much sense.

CSS partnerships

Google’s new scheme includes one small visual update: ads generated by a comparison shopping service carry a “By X” label naming that service. It’s worth pointing out that those who click on the ad itself arrive directly on the retailer’s site; the less than 1 percent of shoppers who click on the “By X” label land on the CSS website.

The ads only show as product listing ads (PLAs) in the search engine results page (SERP), not on the Shopping tab.

It’s a simple process for retailers, who mirror their shopping campaigns in a separate CSS account. They allocate a smaller part of their budget to their regular accounts (which largely show ads within the shopping tab) and spend the bulk within the CSS account to capitalize on the lower CPCs and additional incentives.

Too good to be true?

As with any get-rich-quick scheme, if you only chase big rewards you may fail to see big risks. Google Shopping suffers from a big problem: scarce data. Most products lack adequate historical information to ensure the right bidding decisions.

It’s a slippery slope. Buoyed by a kickback windfall, retailers split their data more and more thinly so both CSS’s and regular campaigns end up with just a fraction of information to work with. A chaotic, myopic process ensues. No one knows how their competitors have behaved. Even if CSS and regular campaigns are managed by the same party, without careful alignment of bids, companies compete against themselves.

Bids for the CSS and the regular campaign are each based on just a sliver of data, and whoever bids highest wins. Realizing they were outbid by a rival, the losers raise their bids in response. Quite often this decision isn’t made intentionally but unconsciously, by bidding tools that can’t see the big picture. In almost every conceivable case, a bidding war begins.

As far as Google is concerned, it’s a democracy. Only one ad can be served for a specific stock keeping unit (SKU) and a specific retailer; it doesn’t matter how many comparison shopping services participate in the auction. Theoretically, all may bid to serve an ad on behalf of the same retailer and enjoy an equal chance of making it to the SERPs.

According to Google’s school of thought, the program’s structure keeps CPCs down for retailers. Theoretically, none of the parties placing bids on behalf of the retailer determine the CPC in the second-price auction.

This may describe the auction design correctly. But it describes the exception rather than the rule and hides the big downsides of almost any other realistic scenario. In practice, auctions see huge diversity. Parties working on your behalf will place a wide range of bids, and competitors who don’t work on behalf of the retailer will determine the CPC in the auctions.

In the example below, CSS B is still placing the same bid as in the previous example. However, Merchant 1 will now pay 400 per click – 20 times higher than before – all because CSS A placed a very aggressive bid.
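The figure this example refers to didn’t survive extraction, so here is a hedged reconstruction of the pricing rule the text describes: the highest bid wins, but only bids placed on behalf of other retailers set the winner’s CPC. All party names and bid amounts are assumptions, chosen only to reproduce the “20 times higher” outcome:

```python
# Hypothetical second-price auction: bids on behalf of the winning
# retailer never set its CPC; the highest competing bid from any other
# retailer does. Names and amounts are illustrative, not the article's.

def run_auction(bids):
    """bids: list of (party, retailer, bid). Returns (winning bid, cpc)."""
    winner = max(bids, key=lambda b: b[2])
    rivals = [b[2] for b in bids if b[1] != winner[1]]  # other retailers only
    return winner, (max(rivals) if rivals else 0)       # second-price CPC

before = [("CSS B", "Merchant 1", 500), ("CSS D", "Merchant 2", 20)]
after = before + [("CSS A", "Merchant 2", 400)]         # aggressive rival bid

_, cpc_before = run_auction(before)
winner, cpc_after = run_auction(after)
print(f"{winner[1]} wins both times; CPC {cpc_before} -> {cpc_after}")  # 20 -> 400
```

Merchant 1 still wins the auction in both cases, yet its CPC jumps from 20 to 400 purely because of CSS A’s aggressive entry.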

Getting it right

How can retailers navigate these challenges? The secret lies in their choice of provider. Getting it right means aligning all available data, and this can only be achieved when a single party manages both the standard Shopping and CSS accounts. Either let your CSS provider manage both the regular and the CSS campaigns, or let your agency manage both if it has a CSS offering.

In normal circumstances, a partner manages campaigns using partial data at the account level. A successful CSS program requires the partner to aggregate product-level data across multiple accounts (the regular direct accounts as well as one or more CSS accounts). With the right technology in place, their bidding algorithm should be able to operate on cohesive – not siloed – data.

At Crealytics (my company), we built a custom comparison shopping service bidding solution for this purpose. It calculates a single bid per SKU, based on all available data and across different accounts. This allows retailers to maximize the kickback while avoiding CPC inflation. Retailers who ignore these points risk overshadowing any kickback-related gains.
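The aggregation step can be sketched as follows. This is not Crealytics’ actual implementation – account names, the target ROAS, and all figures are hypothetical – but it shows the idea of pooling each SKU’s data across accounts before deriving one bid:

```python
from collections import defaultdict

def single_bid_per_sku(account_stats, target_roas):
    """Pool clicks/revenue per SKU across all accounts, then derive
    one bid per SKU from the pooled value-per-click."""
    pooled = defaultdict(lambda: [0, 0.0])  # sku -> [clicks, revenue]
    for stats in account_stats.values():
        for sku, (clicks, revenue) in stats.items():
            pooled[sku][0] += clicks
            pooled[sku][1] += revenue
    # bid = revenue per click / target ROAS, on pooled (not siloed) data
    return {sku: rev / clicks / target_roas
            for sku, (clicks, rev) in pooled.items() if clicks}

accounts = {
    "direct": {"sku-1": (120, 600.0)},   # regular Shopping account
    "css-a":  {"sku-1": (30, 300.0)},    # CSS account for the same SKU
}
print(single_bid_per_sku(accounts, target_roas=5.0))  # {'sku-1': 1.2}
```

Bidding each account in isolation would value sku-1 differently in each (5.0 vs. 10.0 revenue per click); pooling yields a single consistent bid and avoids the self-competition described above.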


PPC’s #fakenews: 9 ways bad numbers might look good
Wed, 11 Jul 2018 18:38:00 +0000

Did you know it’s super-easy to fake your way to seemingly good PPC performance? Contributor Andreas Reiffen shares multiple ways to arm yourself against the pay-per-click version of “fake news.”

The post PPC’s #fakenews: 9 ways bad numbers might look good appeared first on Search Engine Land.


Today’s news landscape makes it tough to separate truth from fiction. Between news hackers and special-interest propaganda, it’s harder than ever to find a source of information you can trust.

Search marketing should be immune to it all. After all, we’re a metrics-first business. We count clicks, we tally conversions and we do difficult math to calculate return on investment (ROI). But did you know it’s super-easy to fake your way to seemingly good performance, due to rookie errors, bad habits or dirty tricks? Here’s how to arm yourself against the pay-per-click (PPC) version of “fake news.”

Rookie errors

It makes sense that less experienced people are more likely to make mistakes. A junior employee just doesn’t have the same level of knowledge; plus, they are hungry to make an impact on the business and their boss. These types of errors are understandable but potentially damaging to campaign health.

Comparing CTR

The higher the click-through rate (CTR), the better: it signals a well-written, relevant ad. But because different factors influence this metric, viewing CTR as a simple before-and-after comparison offers little value. To show that CTR has really improved, you must take everything else into account.

In the example below, it becomes clear that CTR is influenced more by the distribution between Google Search versus Search partners, and Top versus Others, than by the ad text. To understand the impact of ad text changes, compare only [exact] keywords in the same position and the same delivery type.
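The original example table isn’t reproduced here, but the mix effect is easy to demonstrate with hypothetical numbers: hold each placement’s CTR fixed and shift impressions toward the high-CTR placement, and the blended CTR “improves” with no change to the ad text at all.

```python
# Illustrative mix effect: per-segment CTRs are identical before and
# after, yet the blended CTR nearly triples. All numbers hypothetical.

def overall_ctr(segments):
    """segments: list of (impressions, clicks) tuples."""
    imps = sum(i for i, _ in segments)
    clicks = sum(c for _, c in segments)
    return clicks / imps

# (impressions, clicks): Top placement at 8% CTR, Others at 1% CTR
before = [(10_000, 800), (40_000, 400)]    # traffic skewed to Others
after = [(40_000, 3_200), (10_000, 100)]   # same CTRs, traffic skewed to Top

print(f"Overall CTR: {overall_ctr(before):.1%} -> {overall_ctr(after):.1%}")
# prints "Overall CTR: 2.4% -> 6.6%"
```

This is exactly the comparison trap described above: only a like-for-like segment comparison reveals that the ad text changed nothing.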

No neutral tracking tool

It’s your first presentation and the client wants revenue figures. You pull them from AdWords, right? Wrong! To avoid inflated numbers (which AdWords can be guilty of), more experienced managers will use a neutral tracking tool like Omniture or Google Analytics. The same issue happens with Criteo and Facebook tracking: Businesses will see disproportionately high revenue numbers.

No peer group benchmarking

Congratulations! It’s golf season, and your sportswear client grew its sales by 20 percent. Unfortunately, you didn’t take Google benchmarking into account. It turns out that your client’s competitors grew their sales by 40 percent! Fail to normalize the numbers, and you risk painting an inaccurate picture.

Rookie errors are one thing, but this section covers lies of commission: habits and practices where PPC pros should simply know better. Regardless of the intent behind them, advertisers should keep an eye out for these deliberately skewed representations.

  • Mixing brand and non-brand in reporting. I’ve previously mentioned the sometimes-complex relationship between digital marketing and other marketing channels. Because finance dictates the budget, PPC managers may choose to impress them by engineering overly positive performances. Mixing brand and non-brand terms is a good example of this. Generic queries don’t perform as well as their branded counterparts and drive fewer conversions as a result. But they constitute most of the budget. Reporting a blend of both offers an easy (but flawed) “way out.”
  • Blending different markets together. When it’s results time, it might be that different markets vary within the same region. Beware of lazy reporting, bundling everything into one. The blended market makes things look good… at the cost of the true picture.
  • Adding a high share of existing customers. Mature businesses boast plenty of existing customers, but their customer acquisition share remains low. Cynical marketers have a cunning plan to gloss over this. Using remarketing lists for search ads (RLSA), they mix retargeting into PPC traffic.

With the focus on existing customers, return on ad spend (ROAS) skyrockets but new customer acquisition dies off. ROAS isn’t the best metric to optimize for; it won’t tell the whole story.

Percentage sorcery

Let’s have a look below at a retailer’s campaign statistics for the “shoes” category. The PPC campaign grew slightly — by 3k — but isn’t really moving the needle.

Our rogue marketer must now report on performance for “shoes” in the PPC segment (for Jan., Feb. and March). Month over month, it compares poorly to the “total marketing” segment:

Instead, they highlight the growth rates, as shown below. Fantastic! PPC’s growth rate is now 96 percent higher than the total growth rate when you compare the 13 percent against the 7 percent!

But now, a reality check. This is based on a measly uplift of just $3k revenue representing less than 1 percent of the overall revenue.
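The campaign table behind these numbers isn’t shown, so here is a sketch with stand-in figures chosen to reconcile the quoted percentages (a PPC base of roughly $22k makes the $3k uplift a 13.7 percent growth rate, which is indeed about 96 percent above a 7 percent total):

```python
# Stand-in figures illustrating "percentage sorcery": a $3k uplift on a
# small PPC base yields a growth rate that towers over the total, even
# though the absolute gain is under 1% of overall revenue.
ppc_before, ppc_uplift = 21_900, 3_000      # hypothetical PPC revenue base
total_revenue, total_growth = 500_000, 0.07 # hypothetical totals

ppc_growth = ppc_uplift / ppc_before
print(f"PPC growth rate: {ppc_growth:.1%}")                             # 13.7%
print(f"vs. total growth: {ppc_growth / total_growth - 1:.0%} higher")  # 96% higher
print(f"Uplift as share of revenue: {ppc_uplift / total_revenue:.1%}")  # 0.6%
```

Both framings are arithmetically true; only the last line reveals how little the needle actually moved.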

Questionable time frames

Skewing the baseline is another way to mask poor performance. Shortcut marketers simply compare a key performance indicator (KPI) from the last five days to a historic equivalent. It could be the five days prior, or a 30-day average before that – whatever makes things look good enough.

By extension, a marketer may deliberately avoid year-over-year comparisons.

In this example, the marketer highlights a healthy Q2, 2018. Here are some things to note:

  • A dip in April precedes a successful May and June.
  • June grows by 65 percent against April.
  • June grows 40 percent against March.

Compare the revenue year over year and the perspective changes. In this context, Q2 doesn’t look that great after all.

Vague brand-term reporting

On pure brand terms, CPC and impression share are the only decisive metrics. For marketers, brand growth may not boil down to performance at all; it can stem from things like seasonality or an ad on television. Those looking for extra glory may ignore this fact and offer only high-level reports. If you hear “brand revenue increased by x percent,” you should probably question it.

There’s an old quote that seems appropriate here:

There are three kinds of lies: lies, damned lies and statistics.

The best way to arm yourself against all three is to question the underlying assumptions behind the data. Ask yourself, “Does it pass the sniff test?” Check your own biases and look at the data beyond the dashboard reporting. Only then can you be sure you haven’t been a victim of one of these examples.



Beating remarketing addiction and testing for incremental value using Google Analytics
Wed, 16 May 2018 15:52:00 +0000

Trying to kick remarketing ads? Contributor Andreas Reiffen discusses why you may be addicted to them and explains how KPIs directly aligned with business objectives help avoid the vicious cycle that comes from ROAS-based objectives.

The post Beating remarketing addiction and testing for incremental value using Google Analytics appeared first on Search Engine Land.

Pay-per-click (PPC) managers love remarketing. At first glance, this makes perfect sense, as it enables marketers to easily improve their return on advertising spend (ROAS) numbers.

But is it really money well spent?

Recently, we performed an audit for a multinational retailer with a specific concern: improving new customer growth. Things developed nicely: there were no performance issues, and ROAS more than doubled as spend increased slightly.

But digging a little deeper, we noticed something interesting. Remarketing lists for search ads (RLSA) traffic had spiked, and retargeted website visitors accounted for half of all sales.

That wasn’t all. While ROAS picked up, their new customer rate decreased by almost 50 percent!

Obviously not what finance asked for. This unwelcome reality means slow new-customer growth, leading to limited repeat purchases and shrinking future profits.

So, what does this mean? If you win few new customers, it stands to reason that you can expect fewer repeat customers.

Retargeting known users ostensibly yields great performance. And using RLSA helps marketers reach ROAS targets effortlessly. It looks seriously good on paper, but could it be too good to be true?

How RLSA helps

We set out to discover how much RLSA helps with new customer acquisition and overall incremental revenue — the metrics that truly matter to your business.

Besides the fact that it tends to address existing customers only, retargeting raises another concern: Does RLSA truly have a bottom-line, incremental impact? Ask yourself:

  • How many people would have bought my product without seeing the retargeting ad?
  • Which portion of my paid performance would have come in through non-paid channels anyway?
  • Am I paying for traffic that I might have received without any advertising pressure?

In the example above, it becomes clear that if RLSA were 100 percent incremental, we would add to our total revenues. If it had no incrementality at all, we would (silently) just decrease the share of unpaid traffic.

The reality falls somewhere in between. Before saying more, let’s examine our industry’s addiction problem.

The industry can’t get enough of retargeting

Why is there such an addiction to RLSA in our industry?

It’s simple: a higher RLSA share means performance marketing managers can present their bosses with greater numbers and higher revenues at the same cost. For agencies, it means higher budgets and perhaps even higher fees.

And so it stays as popular as ever. Retargeting has also proven addictive for the search engines; they get premium prices for clicks that occur through the channel.

Let’s take retail e-commerce as an example. According to Alphabet’s earnings report, this sector accounts for around 24 percent of total PPC revenue. And an estimated 30 percent of retail spend comes via RLSA — generating several billion dollars in earnings for Google in 2017.

Taking this a step further, we tested enhanced cost per click (ECPC) against manual bidding. Base bids on RLSA increased by 23 percent against the control group, compared to a mere 16 percent increase in non-RLSA cost per click. Viewed more closely, this increase came from lower-funnel audiences (such as cart abandoners), which show the best key performance indicators (KPIs).

Enhanced cost per click is machine-learning-driven: it optimizes the KPIs you provide via AdWords, and those numbers don’t reflect incrementality. Because of this focus on ROAS, Google tries to achieve exactly what you ask of it, which, in turn, makes you retarget more and more.

KPI pointers

When it comes to your target KPIs, a couple of pointers can help.

  • Stay skeptical. If results appear too good to be true, they probably are. This holds especially true when evaluating ROAS (remember, it considers neither new customer acquisition nor incrementality). Be mindful that you automatically retarget users anyway, even without RLSA.
  • Administer an incremental health-check, since appearances can be deceptive. The better the numbers you see, the less incremental they may be. Why not test for yourself? Measure incrementality for RLSA, brand and PPC as a whole.

This chart shows the relation between incrementality and ROAS.

Why does high ROAS usually mean low incrementality? Because you’re targeting users who already know your shop and are likely to have bought there previously.

These users naturally have the highest conversion rates. But they’re also the ones most likely to return through unpaid channels. So why spend extra budget?

Testing incrementality using Google Analytics

So, what are some best practices for testing the incrementality of RLSA? We recommend using analytics data versus engine data; this controls for the natural cannibalization often found in AdWords.

Set up an A/B test with ad exposure as the variable to test:

  • A random split of your audience is best practice for assuring test reliability.
  • All other factors being equal, any differences in the groups’ results can be attributed to ad exposure.
  • Set up random audiences using Google Analytics’ user bucket dimension.
  • Export your Google Analytics audiences to your AdWords ad groups and collect data for search eligibility.
  • Add lists to campaigns to reflect test design… and activate.
  • Measure results using Google Analytics’ custom report.

Step 1: Randomizing website visitors as test and control group

In Google Analytics’ Audience Builder, select “Condition” followed by “User Bucket.”

Next, set up your desired group size. For two groups of the same size, assign 50 buckets each.
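Under the hood, the User Bucket dimension assigns every user a random number from 1 to 100, so a 50/50 split simply maps buckets 1-50 to one group and 51-100 to the other. A minimal Python sketch of that mapping (the boundary at 50 is just the illustrative even split described above):

```python
import random

def assign_group(bucket: int) -> str:
    """Map a Google Analytics User Bucket (1-100) to a test group.

    Buckets 1-50 -> control, 51-100 -> treatment (illustrative 50/50 split).
    """
    if not 1 <= bucket <= 100:
        raise ValueError("User Bucket must be between 1 and 100")
    return "control" if bucket <= 50 else "treatment"

# Simulate how GA randomizes: each user gets a uniform random bucket.
users = {user_id: random.randint(1, 100) for user_id in range(10_000)}
groups = [assign_group(b) for b in users.values()]
print(groups.count("control"), groups.count("treatment"))  # roughly 5000 / 5000
```

Because the bucket is assigned uniformly at random per user, any boundary you pick yields a random split; 50 buckets each just makes the two groups the same size.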

Step 2: From Analytics to AdWords

Finalize your audiences in Google Analytics and export. The “Users over last 7 days” option will give you an initial impression of the list’s volume. Assign a name to the audience and finish the Audience Builder work by choosing a destination for the audience — in this case, AdWords.

Once in AdWords, you’ll have to wait for the list to collect enough users to become eligible for search retargeting.

Step 3: Combine each existing list with randomized audiences

To differentiate between Treatment and Control groups:

  • Once your audiences have collected a sufficient volume of users, you can start the test.
  • Add both lists to your campaigns. Next, exclude the users of one list from AdWords, while boosting the other by 100 percent.
  • To avoid cannibalization, no other lists should be active. Important point: If your current performance relies heavily on RLSA, the test should not run on the entire account, as it is likely to impact efficiency. In this case, choose a few campaigns that have high traffic — but not enough to hurt overall numbers.
  • If the entire account is low-traffic, ensure you free up some extra budget, without a ROAS target assigned to it, to run the test.

Test several audiences from different stages in the funnel. If your account has lots of traffic, you could test the incrementality of different types of users.

To do this, create combined lists in AdWords, consisting of an audience such as basket abandoners, or buyers plus one of the random lists from Analytics. Then proceed as described above.

Step 4: Measure results with a custom report in Analytics

Set the “Primary Dimension” as “User Bucket.” Then, aggregate bucket performance into test groups and compare performance by several KPIs (such as Transactions, Sessions, Users or Page views). The lower the difference between the buckets excluded from AdWords and those that were pushed in AdWords, the lower the incrementality of RLSA.
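Once you’ve aggregated each group’s KPIs from the custom report, the comparison itself is simple arithmetic. A hedged sketch (the function name and the transaction numbers are illustrative, not measured data):

```python
def incremental_lift(control_kpi: float, treatment_kpi: float) -> float:
    """Relative lift of the boosted (treatment) buckets over the excluded
    (control) buckets. Values near 0 mean RLSA added little incremental value."""
    if control_kpi == 0:
        raise ValueError("control KPI must be non-zero")
    return (treatment_kpi - control_kpi) / control_kpi

# Example: transactions aggregated per group from the GA custom report.
control_transactions = 1_000    # buckets excluded from AdWords
treatment_transactions = 1_050  # buckets boosted by +100%
print(f"{incremental_lift(control_transactions, treatment_transactions):.1%}")  # 5.0%
```

The same function works for any of the KPIs mentioned above (Transactions, Sessions, Users, Page views); run it per KPI and look for consistently small lifts.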

Tips and tricks to remember

Do:

  • Make sure you have enough data.
  • Enable remarketing data collection in Google Analytics (GA).
  • Use a -90 percent bid modifier for Shopping, since negative audiences are not available.
  • Avoid bid cannibalization by pausing existing RLSA lists. If you test several types of audiences, exclude these from one another — such as cart abandoners from site visitors.

Don’t:

  • Mistake RLSA boosts for retargeting. As long as you don’t exclude all site visitors from all AdWords activity, you will always automatically retarget some users via PPC.
  • Get too granular with your lists. To generate a valid amount of data, they should have a decent volume of users and traffic.

Brace your organization for change

Pursuing high incrementality and long-term profits means accepting lower nominal ROAS numbers (as a result of fewer eligible conversions). Keep in mind that it’s just numbers! You’re not losing out on overall revenues for your business.

Getting your data science and finance teams involved early is a smart way to set expectations across the organization. Considering KPIs that are directly aligned with business objectives, such as profit or customer lifetime value, helps avoid the vicious cycle that comes from ROAS-based objectives.

And your finance team will likely embrace this way of thinking — it’s how they get measured for success!

When you first started out using search, driving efficiency was a valid strategy. But while optimizing, it pays to be proactive… think about the next strategic step-change. New, better KPIs offer a perfect example, as they open new room for growth.

The famous management consultant Peter Drucker said something quite applicable to our over-reliance on RLSA:

“There is nothing so useless as doing efficiently that which should not be done at all.”

Elevating your thinking on measurability and data accuracy creates a healthier and more profitable enterprise and ultimately leaves your campaign efficient and effective.

The post Beating remarketing addiction and testing for incremental value using Google Analytics appeared first on Search Engine Land.

Giants collide: Google’s impending showdown with Amazon
Thu, 22 Mar 2018 20:41:00 +0000
With Google pushing into retail and Amazon into advertising, where should you invest your ad budget? Contributor Andreas Reiffen says it depends on what you're selling and how much you have to invest.



Talk in the digital advertising space has traditionally focused on Google and Facebook’s digital duopoly, each fighting for more of your advertising dollars. But for Google, the real competition is coming from elsewhere.

Over the last few years, Google has pushed further and further into the retail space. Simultaneously, Amazon expanded its advertising platform. In the end, they both want the same thing: your purchase intent.

As these two tech behemoths continue to get more and more similar, who will lead? And more importantly, what should you do about it?

Google’s bid to become a retailer

We’re used to thinking of Google as a search engine. For advertisers, Google’s search engine results page (SERP) is composed of text and product (Google Shopping) ads. These product ads have grown steadily in popularity with digital marketers. According to Adobe’s Digital Index, spend on Google Shopping has surpassed that of text ads in every market.

This is especially true when it comes to mobile ad spend, for which, in our clients’ experience, Google Shopping is far more profitable. In the UK, for example, our clients spend around 74 percent of their mobile ad budget on shopping ads.

Due to the popularity and success of its shopping ads, Google has made a significant investment in their placement and prominence. The search giant’s rationale appears to be that product ads are more attractive to users, generating more revenue for retailers (and thus themselves).

It then threw a “purchase on Google” option into the mix. This allows shoppers to complete their purchase without ever leaving the Google interface. Google also started surfacing product ads for super generic search terms. This indicates that it wants to own the browsing aspect of online shopping, in addition to the purchase.

The convergence of two giants

So, what is the net result? As Google has continued to experiment with ever larger and more prominent placements for its product ads, the product SERP has taken on a startling resemblance to an Amazon results page.

The two business models look set for a collision course. With the addition of a buy button, Google is attempting to become a retail marketplace. Amazon, on the other hand, has been expanding its sponsored product ads offering, which looks increasingly like an advertising platform.

Even more telling, Google just announced what it calls Shopping Actions and Google Express.

Shopping Actions allows users to create a universal shopping cart of items from multiple retailers. Once they have a full cart, shoppers use a Google-hosted checkout service to pay for all items at one time. Google Express is Google’s new foray into same-day delivery. This is undoubtedly Google taking another swipe at Amazon and other large retailers.

Both Amazon and Google have invested in logistics, too, although with an important difference.

With its recent leasing of 40 cargo planes and a $1.5 billion investment in a cargo hub to house them, Amazon is opting to own logistics from end to end. It has also invested heavily in delivery technology, from a fleet of drones to a doorbell that grants delivery drivers access to your house.

Google is comparatively late to the logistics game, but it’s certainly making up for lost time. Under its parent company, Alphabet, it has purchased no fewer than eight robotics companies in the last few years.

Owning self-driving pioneer Waymo puts Google among the strongest players in the driverless car industry. True, it hasn’t turned this technology to last-mile delivery just yet, but it seems like a logical next step given Google’s trajectory.

What does this mean for you?

The reality of today’s online advertising world is that it’s less a choice between online ad providers and more of a choice about how much to invest in each provider. That decision will likely differ depending on your market, product offering and whether you have your own brand or sell several.

As with anything in the performance marketing arena, you need to allow for a good amount of trial and error before working out which channels work best for you. The most important thing when running your tests is to make sure you account for incrementality and margins. This will give you a true understanding of a campaign’s impact on your gross profits.

That said, it can’t hurt to offer a few tips for advertising on Google and Amazon. We picked these up through experience and extensive testing:

Advertising on Google

Our number one tip for advertising on Google is to ask yourself, “How different is our price for the same product from other vendors?” Our tests consistently show that the retailer with the cheapest price outperforms all the other vendors.

Changing the price of your products can massively impact your product listing ad (PLA) performance. For one retailer we work with, just a 5 percent price increase coincided with a 60 percent decrease in clicks.

On average, we found that lower-priced products drive performance across the board.

Depending on your situation, you may or may not be able to change prices to reflect the market on Google Shopping. Even if you can’t, you can use price competitiveness as a signal in your bidding.

For example, you wouldn’t want to bid very much (or anything at all) on products for which you aren’t competitive.
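One way to encode that signal, assuming you have competitor price data from a feed or price-monitoring tool, is a bid multiplier driven by the ratio of your price to the cheapest competitor’s. The thresholds below are illustrative assumptions, not tested values:

```python
def price_bid_multiplier(our_price: float, cheapest_competitor: float) -> float:
    """Scale a base bid by price competitiveness.

    Illustrative thresholds: bid up when we're the cheapest vendor, bid down
    sharply (or to zero) when we're clearly more expensive.
    """
    ratio = our_price / cheapest_competitor
    if ratio <= 1.0:
        return 1.2   # cheapest in the auction: bid more aggressively
    if ratio <= 1.05:
        return 1.0   # roughly at market price: keep the base bid
    if ratio <= 1.15:
        return 0.6   # somewhat expensive: reduce exposure
    return 0.0       # not competitive: don't bid at all

base_bid = 0.50
print(base_bid * price_bid_multiplier(19.99, 21.49))  # cheapest -> 0.6
print(base_bid * price_bid_multiplier(24.99, 21.49))  # not competitive -> 0.0
```

The exact thresholds would come from your own testing; the point is that price competitiveness becomes an input to bidding even when you can’t change prices themselves.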

Advertising on Amazon

As a brand or retailer, making the decision whether or not to advertise on Amazon can be a tricky one. If you do decide to sell on the Amazon Marketplace, or to Amazon directly, you should take advantage of Amazon’s most popular pay-per-click (PPC) ad type, the Amazon Sponsored Product ad.

In contrast to Google Shopping, Amazon allows advertisers to decide which keywords they want their products to appear for on its search results page.

This gives back one of the most valuable options for advertisers: They can set up different campaigns for brand, generic and competitor keywords and test for sales incrementality.

The post Giants collide: Google’s impending showdown with Amazon appeared first on Search Engine Land.

Side by side: Comparing two performance marketing tools/agencies
Wed, 24 Jan 2018 17:24:42 +0000
Columnist Andreas Reiffen shares his method for testing two different AdWords tools/agencies -- a process which could also be applied to test different campaign strategies.



As performance marketers, we’re conditioned to want to test everything. From the impact of feed titles to the incrementality of each channel, we want to be sure that we’re making the right choice before we commit all our resources to something.

That goes for deciding which tools/agency to use as well. Moving your performance marketing activities from one tool/agency to another (or picking one to start with) is a big commitment and not one you should take lightly.

Most (probably all) tools/agencies claim to do the same basic thing: improve your campaign performance. How they do it will differ, but with so much choice out there, how are you to know which one will actually deliver?

To help them make the right decision, many companies will ask for a side-by-side comparison test between two tools/agencies. We were recently asked to participate in a split comparison test with another product ad technology for a multinational sports retailer.

Using this latest test as an example, I want to provide you with a few tips and recommendations for conducting an accurate side-by-side test and getting the best results.

Have a clear test purpose

No matter what kind of test you’re conducting, you need to make sure you have a clear goal in mind, i.e., a question that can be answered in a definitive and measurable way.

In this case, the retailer wanted to know if replacing their existing tools with our Google Shopping solution would impact their bottom line. To measure the outcome, they set a fixed ROAS target for each tool to meet and measured the resulting revenue figures.

Split test groups evenly

As always, a good test setup is critical if you want the results to be accurate. I’ve written before about different testing methods, but the simplest way to approach this type of test is to think of it like any other A/B test.

The key is to divide up your campaigns as evenly as possible so there are no extenuating circumstances that might explain one tool’s/agency’s success or failure. There’s no silver bullet here. The best way to split out your campaigns will likely differ depending on your products, the season, shopper demographics and so on.

The important thing is to try to keep as many variables the same as possible. You especially want the current and recent historical figures to match up — e.g., roughly the same traffic, conversion and ROAS numbers.

The two most common ways to split up campaigns are by time or location. For a time slot split, you would duplicate the campaigns you want to use in the test, assign one to each tool/agency, and then activate them on a fixed order of rotation (usually hourly). The benefit of this method is that it does away with any seasonality or geographic concerns by ensuring that each tool/agency is working with exactly the same product range over the same date range.

For a geo split, you take one of your regions, duplicate the campaigns there and divide into two sections with roughly the same impression and conversion volumes. Then, you assign each tool to one half of the territory and switch them every two weeks. The benefit of this sort of split is that it is technically quite easy to set up and allows you to adjust the scheduling during the test, as well as run promotions. It also allows each tool/agency to work simultaneously, which cuts down on the amount of time you have to run the test.

Technically, you can split your campaigns up any way you like. As I said before, there is no one way to split out your campaigns. It all depends on how your campaigns/territories/customers/products are distributed. You just want to make sure to get the two campaigns as close as possible.

Run your test for long enough

Testing — no matter what kind — costs money. As time goes on, you run the risk of wasting money on a strategy that isn’t working or isn’t working as well as another strategy. On the flip side, you need to run your test for long enough that you get a clear answer as to which test group performed the best.

A 50/50 testing split means dividing your total traffic in half. You want to make sure each variant receives enough traffic within the test window for the results to be statistically significant. How long this takes will depend on how much traffic you get and how pronounced the difference is.
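As a rough yardstick for “long enough,” the standard two-proportion sample-size formula estimates how many visitors each variant needs before a given conversion-rate difference becomes detectable. A sketch assuming roughly 95 percent confidence and 80 percent power (the z-values baked into the defaults):

```python
import math

def sample_size_per_variant(base_cr: float, lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per variant to detect a relative `lift`
    over a baseline conversion rate at ~95% confidence / ~80% power."""
    p1 = base_cr
    p2 = base_cr * (1 + lift)
    p_bar = (p1 + p2) / 2
    delta = abs(p2 - p1)
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)

# E.g. a 2% baseline conversion rate and a hoped-for 10% relative improvement:
print(sample_size_per_variant(0.02, 0.10))  # tens of thousands of visitors per variant
```

Notice how the requirement shrinks as the expected difference grows: subtle differences between two good tools are exactly what forces long test periods.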

Keep in mind that any tool (or agency) that uses algorithms and machine learning to manage bids will need a certain amount of time to collect data before the automation will really live up to its potential.

In this case, the retailer ran the test for 10 weeks. You can see that the differences between us and their existing tool were very slight in the first half of the test while the algorithms collected data. But by the second half of the test, we had collected enough data for the automated bidding and the new campaign structure to take effect, creating a much more pronounced difference between our two approaches.

Another important thing to remember is to wait another two weeks after the test finishes before collecting and evaluating the results so that you capture any latent sales.

Evaluating the results

Before the test, make sure you set clear, achievable goals. The goals you want your candidates to reach need to be achievable within the parameters (e.g., budget, time, season, product SKUs) you set alongside them. You can’t expect the moon just yet.

Now is also not the time to get fancy with your metrics. Set the same type of targets you’ve been using so that you can easily compare the new results to your baseline data. Once your test period is up, compare the results from each tool/agency against your benchmark KPIs.

In this case, the most important metric for the retailer was who could drive the most revenue within the target ROAS. However, they also looked at other metrics, like cost and ROAS.

You’re not necessarily looking for the hard numbers; what’s more important is the difference between the performance of the two tools/agencies and how that compares to your baseline. In this case, our competitor was already the established tool, so there was no need to plot the baseline.

If you’re evaluating agencies, or if the tool you’re looking at has a customer support arm, ask them to give you their take on how the test went. Not only will this provide you with some color commentary for the raw data, but it also will give you an idea of what the working relationship with that company will be like going forward. For example:

  • Do they make a lot of excuses?
  • Do they have good reporting techniques?
  • What is their plan for the future of your accounts?
  • What adjustments would they recommend you make to your accounts?

You want to work with a company that will push innovative ideas and new strategies, not one that will sit back and perform the same repetitive tasks over and over.


Don’t bias the results

There are lots of things you can do during testing that may skew the outcome. In addition to setting up a good test, here are a few recommendations for keeping the results unbiased.

First of all, it’s generally not a good idea to change much of anything during the testing period, but you’d be surprised how often this happens.

Budgets, and how each agency or tool allocates them, are a key part of what you want to evaluate during the test period, so don’t change them mid-test. The same goes for your attribution model and target KPI: changing either will hugely affect the results collected thus far.

Check in on progress. You don’t want to change things unnecessarily, but you should still be checking in with the results as they’re available to make sure your test structure is working. If something looks radically off in the first couple weeks, it probably is. In that case, you may need to rethink your testing structure and make some changes. Any changes you make will likely extend the end date for the test as well.

Hopefully, this example will provide a guide for your tool/agency evaluations. 2018 is going to be a big year for retail, and it’s essential that you cultivate the right tools for the job.


The post Side by side: Comparing two performance marketing tools/agencies appeared first on Search Engine Land.

The vicious cycle of ROAS targets is killing your business
Wed, 13 Dec 2017 17:28:28 +0000
While many companies focus on return on ad spend (ROAS) as their primary KPI for search, columnist Andreas Reiffen believes that ROAS targets can often inhibit growth and new customer acquisition.



Your marketing team is hard at work tweaking ads and landing pages to drive efficiency and hit the targets set for them by the C-suite. And those targets are more than likely ROAS-related.

But, for two reasons, these ROAS targets are actually causing a lot of damage:

  1. ROAS usually doesn’t take incrementality into account, which incentivizes marketers to turn on retargeting or brand campaigns to meet their targets while hardly generating any tangible results.
  2. It sets incentives to sell more low-margin products to mainly existing customers because this type of second-class revenue is cheaper to get.

If, like most companies, you’re focused on growth and new customer acquisition, you need to ditch ROAS-based KPIs, come up with a new metric and include incrementality before it’s too late.

This is what you get if you ignore incrementality

When we talk about “incremental sales” as a digital marketing KPI, we’re talking about how much a specific marketing campaign or channel contributed to increasing sales revenue. So, if a search or shopping ad led to a sale that wouldn’t have happened otherwise, that’s an incremental sale.

Return on ad spend (ROAS) takes into account purchases from users after clicking on an ad. At first glance, that sounds reasonable. It seems like that measure would tell you how good an ad is at driving revenue.

But what ROAS usually doesn’t tell you is whether or to what extent those sales would have happened anyway (without showing ads). In other words, ROAS doesn’t account for incrementality.

Imagine you’re shopping for high-priced luxury products; you put them in the shopping basket, but then decide to wait another few days to think about whether it’s worth spending the money. Then you see your favorite products following you all over the web, and at some point, you’re intrigued to click through. Finally, the day after, you buy. This happens hundreds of thousands of times every day.

Our industry now understands — much better than a couple of years ago, at least — that a significant number of these people would have bought the items anyway, even if they hadn’t seen the ad.

You’re probably thinking, “OK, sure, but how big a deal is incrementality, really?” It turns out it’s quite a big deal. Based on our internal client testing here at crealytics, we’ve found the following:

  • If you’re a multibrand retailer (e.g., Kohl’s or Staples), brand searches will usually drive no more than 1 percent incremental sales.
  • Display retargeting often hovers around 5 percent incremental sales when tested properly.
  • Search retargeting rarely gets higher than 20 percent incremental sales.

Channels that drive the highest number of incremental sales are also generally more expensive. So, if you set ROAS targets without taking incrementality into account, marketers will have to look for cheaper sources of revenue. Usually, they will find themselves in a situation where “Search Brand” is already split out and treated separately because of its obvious lack of incrementality. So, where do marketers find the revenues they need?

The revenues which are least incremental are usually the cheapest, and therefore, marketers often try to increase the volume of display or Facebook retargeting first. Search retargeting is also a great way to hit targets without really having a substantial impact on the business. And the best part about search retargeting is that it’s hidden in the overall search numbers — you have to really zoom into AdWords to see what percentage of the revenue is coming from people who might have bought without spending ad money.

The vicious circle of ROAS targets

Let’s assume you’ve tested the incrementality of your most important marketing channels, and you’re factoring in the findings when measuring the success of your campaigns. Instead of setting traditional ROAS targets, you now refer to incremental ROAS.

In this case, ROAS should no longer be an issue, right?

Sadly, no. In reality, it’s still a big issue which silently destroys performance even at some of the savviest retailers.

How performance marketing targets are set

In most retail companies, marketing budgets are set by finance based on the historical performance of past advertising campaigns. Finance knows ROAS is a bad indicator of bottom-line profitability, so they go ultra-granular: they take the numbers from some internal tracking system (usually based on last-click attribution) and analyze the profitability of every single order, taking into account contribution margins after COGS, shipping, packaging, payment costs and so on.

If bottom-line profitability differs from the internal financial planning, ROAS targets and budgets are adjusted accordingly. Marketing is then incentivized to hit the new targets while not exceeding the budget constraints.

What marketers will do to hit their targets

In order to hit these ROAS targets (including incremental ones), performance marketers will tend to sell more low-margin products to mainly existing customers because these sales deliver the best ROAS.

One simple way to sell to existing customers is by using Customer Match to target known customers. If revenue is the criterion and not margin, bidding systems will automatically allocate the budget where revenue can be found at the cheapest price. Areas of the assortment which have low margins will look better because there is usually less competition.

So, what happens in the next budgeting cycle? Finance will again zoom down to the most granular level, take all the orders and analyze profitability. They will notice that for some strange reason, profitability and new customer rate are down again. As a result, they will tighten the ROAS target.

If you see ROAS targets in your company, it’s very likely that you could do much better. If, on top of that, those targets don’t reflect incrementality, you’re really missing out on a huge opportunity.

Setting better targets and testing incrementality

In order to set performance marketing targets that are beneficial to the bottom line, you first need to find the exact incrementality levels for each of your marketing channels.

In short, incrementality tests are implemented by defining a test group and a control group. The test group sees ads; the control group doesn’t. You then compare the revenues generated by the two groups over time. If the ads are incremental, the test group will generate more revenue than the control group; how much more defines your incrementality.
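With equal-sized groups, the arithmetic is straightforward: incrementality is the share of the ad-exposed group’s revenue that the control group did not produce. A sketch with invented numbers:

```python
def incrementality(test_revenue: float, control_revenue: float) -> float:
    """Share of the ad-exposed group's revenue that would not have occurred
    without the ads: (test - control) / test."""
    if test_revenue <= 0:
        raise ValueError("test revenue must be positive")
    return (test_revenue - control_revenue) / test_revenue

# Invented example: equal-sized groups, ads shown only to the test group.
print(f"{incrementality(120_000, 114_000):.0%}")  # 5%: most sales would have happened anyway
```

If the groups differ in size, normalize revenue per user in each group before comparing.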

Once incrementality levels have been established, marketing and finance can work together to align on which metrics they want to use to measure progress. I always recommend customer lifetime value (CLV) or margin.

By using a profit-driven metric, you remove the ability to hit targets by selling low-margin products; and by taking incrementality into account, you make sure that hitting those targets gets you incremental gains.
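Put together, the metric marketing and finance align on might look like an incrementality-adjusted return on profit rather than raw ROAS. A hypothetical sketch (the margin and incrementality figures are invented for illustration):

```python
def incremental_profit_return(revenue: float, margin: float,
                              incrementality: float, ad_spend: float) -> float:
    """Profit-based, incrementality-weighted alternative to ROAS: only the
    incremental share of contribution margin counts as return."""
    incremental_profit = revenue * margin * incrementality
    return incremental_profit / ad_spend

# Same campaign, two lenses (invented numbers):
revenue, spend = 100_000, 10_000
roas = revenue / spend                                        # 10.0: looks great
real = incremental_profit_return(revenue, 0.30, 0.20, spend)  # ~0.6: sobering
print(round(roas, 2), round(real, 2))
```

The gap between the two numbers is exactly the low-margin, low-incrementality revenue that a plain ROAS target rewards.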

The only way to enable marketers to really drive what matters is to give them access to order profitability and margins in such a way that they can use them in their bidding tool. This will undoubtedly require some technical integration, but it will deliver an unparalleled return.

The post The vicious cycle of ROAS targets is killing your business appeared first on Search Engine Land.

The big problem with PPC attribution modeling no one is talking about
Wed, 04 Oct 2017 16:06:30 +0000
The question of which attribution model to use in paid advertising is a tough one to answer, and columnist Andreas Reiffen explains why.



Recently, I was asked to comment on my favorite attribution model for paid advertising. Off the top of my head, I didn’t really have one — and the more I thought about it, the more I realized that all attribution models are fundamentally flawed in a way no one is thinking about. Or at least, not that I hear discussed regularly.

The issue isn’t with the actual models, per se, it’s that no model really nails what the “total value” being measured actually is.

Let me explain.

The point of attribution

Marketing managers have a lot of options when it comes to how they spend their PPC budgets. Getting the right balance between online channels can be tricky because not every channel drives a direct sale.


Some channels, like social media ads, are known as top-of-funnel channels — they target consumers early on in their purchase journey when they are simply browsing. The goal of these channels is to arouse interest in a particular brand or product even if a sale doesn’t happen in that instant. Conversely, other channels, like Google Shopping, are a lot more likely to drive a direct sale because consumers happen upon them much later in the purchase funnel.

If you allocated your budget strictly in proportion to the channels that drove direct sales — the logic of “last-click” attribution — you would miss out on the top-of-funnel marketing activities that drive awareness of and desire for your products.

Attribution models were created so that we could assign due credit to each channel for delivering a certain percentage of the “total value” that advertising brings. That way, you would know how many resources to put into each channel for the most effective marketing strategy.

My issue with all attribution models is that they assign credit to touch points, which presupposes an accurate measurement of the value of a transaction generated by advertising. A model can estimate what percentage of that total should go to each touch point, but if the total value itself is inaccurate, every credit derived from it is inaccurate too.
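To make that concrete, here is a minimal sketch (not a recommendation of linear attribution — the touch point names and the 90.0 "total value" are invented for illustration): the split is the easy part; the total being split is the weak link.

```python
def linear_attribution(touch_points, total_value):
    """Divide a measured total value evenly across the touch points."""
    share = total_value / len(touch_points)
    return {tp: share for tp in touch_points}

credits = linear_attribution(["social ad", "Google Shopping", "brand search"], 90.0)
# Each touch point is credited 30.0 — but if 90.0 is raw revenue rather than
# true value, every credit inherits that error, however clever the split.
```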

What is the total value?

The most simplistic answer to this question is order revenue, which on its face makes sense — you want your ads to generate the maximum possible sales revenue. But in e-commerce, things are never that simple.

Using something as basic as revenue means that you are devaluing the impact of a lot of very important factors for e-commerce success.

For example, if you consider that every customer has a lifetime value, wouldn’t it be worth more to acquire a new customer (who has a full lifetime value ahead of her) as opposed to an existing customer (who only has a partial lifetime value ahead of her)?

A purely revenue-focused attribution model wouldn’t be able to make this distinction of value between customer types. It also ignores secondary effects like customer referrals, newsletter signups, social media shares and product reviews — all of which can have a big impact on sales.

There’s also the question of which stock item is sold via which touch point. We’ve spoken before about how not all product sales are created equal. Selling something that is taking up expensive warehouse space and of which you have an abundance is far more valuable than selling something that’s running out of stock quickly. Yet, when revenue is tracked, a full attribution value is assigned to both sales equally.

Revenue models also fail to assign different values to sales at the various product life stages. This is particularly problematic when you consider the sale of perishable goods (I include products with extreme seasonality and fast fashion in this category) because the value of their sale changes dramatically throughout their life cycle.

Finally, while two products may have the same price, they could have wildly different margins, making one of them a much more profitable sale than the other. But, since revenue and profit are not interchangeable, current attribution models can’t account for this difference.

Solve the value first, then optimize

My point is that there is a lot more to advertising value than pure revenue. Treating every transaction the same will eventually lead you to allocate the completely wrong value to the touch points in your customer journey.

Most discussions of attribution models talk about how we can tweak this metric or that to divide the allocation differently. But, if what you’re dividing in the first place is ambiguous, incomplete or wrong, the value of that division is completely lost.

I don’t claim to have the answer to what the total value should really be (yet, anyway). Even Google’s latest stab at data-driven attribution doesn’t take into account the intricacies of product life cycle management. But as an industry, I think this is the conversation we ought to be having.

We should put our brains and our models to work to figure out how to accurately calculate the true value of each transaction before we go about attributing value to the touch point that led us there.

I may not have been able to come up with my favorite attribution model, but I can say that any model is only as good as the action you are able to take based on its data. It’s essential that we create bidding systems that are able to “see” the attributed values and base bid adjustments on them automatically.

Breaking down silos in e-commerce retail /breaking-silos-e-commerce-retail-279884 Wed, 09 Aug 2017 13:54:35 +0000 http:/?p=279884 Want to improve your e-commerce sales? Columnist Andreas Reiffen makes the case for data sharing across marketing channels to avoid lost revenue and wasted ad spend.

The post Breaking down silos in e-commerce retail appeared first on Search Engine Land.


In general, e-commerce retail can be thought of in three main parts: customer acquisition, conversion and retention/measurement.

  • Customer acquisition is what the majority of digital marketers are concerned with. It’s all the different product advertising efforts across all digital channels that lead a customer to discover your products.
  • Conversion is about getting the customer to make a purchase. To a certain extent, this is about the experience your website presents to a customer, but mostly it’s about whether you have the product the customer wants to buy and whether you’re selling it at a price they’re willing to pay. Conversion, therefore, has a lot to do with your price and inventory management.
  • Retention and measurement are about how much each new customer is worth to your business — i.e., how many purchases are they likely to make. This means integrating your CRM (Customer Relationship Management) and ERP (Enterprise Resource Planning) systems to make sure you are getting the maximum profit from your customers for your advertising spend.

Each leg of this tripod is equally important to a successful business, and each one collects an incredible amount of data. Unfortunately, in the majority of companies, that data remains siloed within the team that collected it. Data is not fed back into other parts of the marketing machine; as a result, revenue is lost and ad spend is wasted.

I believe that in order to create a truly optimized marketing process, retailers need to integrate their product advertising efforts with inventory management and price competitiveness.

Let’s look at what can be achieved by breaking down these marketing silos.

Product advertising and price management

I’ve talked before about how if you fail at pricing, you’ll fail at product advertising. What this means is that Google (and other PLA networks) includes price competitiveness as an algorithmic signal. According to our research, lower-priced products generate 4x more conversions than expensive products.

If your product’s price is above the average, it’s virtually impossible for you to reach position one, no matter how high you raise your CPC (cost per click) bid. Therefore, bidding higher (or at all in some cases) on products that are not priced competitively is a complete waste of money.

The key takeaway here is to not waste your advertising budget bidding aggressively on overpriced products. Instead, use product advertising performance as a way to test the price elasticity of your products. Play around with raising and lowering prices to find out where the optimum conversion level is.
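One hedged way to read such an elasticity test (all numbers below are invented test cells, not research figures): compare profit per 100 clicks at two price points, so margin and conversion rate trade off in a single number.

```python
def profit_per_100_clicks(price, unit_cost, conversion_rate):
    """Expected profit from 100 ad clicks at a given price point."""
    return 100 * conversion_rate * (price - unit_cost)

# Hypothetical test cells: the lower price converts far better...
low_price  = profit_per_100_clicks(price=45, unit_cost=30, conversion_rate=0.04)
high_price = profit_per_100_clicks(price=55, unit_cost=30, conversion_rate=0.015)
# ...and wins on total profit despite the thinner per-unit margin.
```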

Even if you can’t change your products’ prices, you should at the very least be using pricing data to inform your product advertising strategy. Focus your ad spend on those products that are competitively priced, and use them as a gateway to your website.

Product advertising and inventory management

Data silos and misaligned metrics between the advertising and merchandising departments can cause retailers to waste significant amounts of money.

In many organizations, excess budget is allocated to products that quickly run out of stock, rather than products that really need an advertising push to sell out. In fact, according to our research, around 40 percent of ad budgets are spent on products that will run out of stock within three weeks. On the flip side, only 21 percent of budgets are allocated to products that won’t sell out within three months.

The way I see it, there’s no sense in wasting money on products which will run out of stock anyway. Instead, PLA (product listing ads) budgets should be used to add market pressure to slow-moving stock that would benefit from additional support.

Using feed advertising systems and an advanced product advertising tool, you can use inventory to direct product ad investment based on your sell-through rate and stock level.
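A minimal sketch of that logic (the three-week and three-month thresholds mirror the figures above, but the modifier values and function names are assumptions): derive weeks of stock cover from the sell-through rate, then scale bids accordingly.

```python
def weeks_of_cover(stock_units, units_sold_per_week):
    """How many weeks the current stock lasts at the current sell-through rate."""
    return stock_units / max(units_sold_per_week, 1)

def inventory_bid_modifier(stock_units, units_sold_per_week):
    cover = weeks_of_cover(stock_units, units_sold_per_week)
    if cover < 3:     # will sell out within ~3 weeks anyway: pull bids back
        return 0.5
    if cover > 13:    # unlikely to sell out within 3 months: needs the push
        return 1.5
    return 1.0        # healthy sell-through: leave bids alone
```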

Product advertising and CRM/ERP

One reason too much ad spend is going toward products which would sell anyway is that digital marketers are focused on improving ROAS (return on ad spend). This ROAS model of measuring search marketing simply doesn’t work.

ROAS-based models focus on whether your advertising is efficient but ignore if that’s the most effective way of growing your business. ROAS is all about winning that impression, click or customer engagement, but it doesn’t take into account the ultimate business measurements of profitability, margins and new customer acquisition.

Instead, businesses should think more in terms of CRM and ERP. The big question is how much you should spend on all your advertising channels in order to achieve your total long-term revenue goals.

Our research shows that when advertisers focus on long-term revenue goals, they make about 5 percent more revenue in the first year than when they focus on short-term goals like ROAS.

Instead of ROAS, you should be optimizing all your paid activities for CLV (Customer Lifetime Value), or how much profit you will make off a new customer over the course of a given time frame (usually one year). This is where CRM comes in. The more profit you can get out of a new customer, the more you can spend on advertising to them.

ERP systems will help you to understand the margins of the products you are selling. You want to sell more of the ultra-profitable products by bidding against specific margin levels.
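One hedged way to turn those CLV and margin figures into an actual bid (a sketch with assumed parameters, not Crealytics’ actual method): cap the CPC at a share of the first-year profit a click is expected to generate.

```python
def max_cpc(conversion_rate, first_year_clv_profit, profit_share_for_ads=0.5):
    """Highest CPC that still leaves (1 - profit_share_for_ads) of expected
    first-year customer profit after acquisition cost."""
    return conversion_rate * first_year_clv_profit * profit_share_for_ads

# E.g., a 2% conversion rate on a product whose buyers yield 80.00 of
# first-year profit supports bids up to about 0.80 if half of that profit
# may be spent on acquisition.
bid_cap = max_cpc(conversion_rate=0.02, first_year_clv_profit=80.0)
```

Higher-margin products and better-converting customer segments raise the cap automatically, which is the behavior ROAS-only bidding cannot produce.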

Knowing customer lifetime value and profitability at the product level allows for improved paid media placement, intelligence in product pricing, and better inventory optimization.

Integrated marketing

It’s impossible to make smart decisions if you don’t understand each piece of the marketing trifecta and how it relates to the other parts. In retail organizations today, these are all separate functions, but by aligning these channels to operate in concert, your whole business remains focused on a single set of mutually beneficial and coordinated metrics.

For example, in order to effectively sell excess stock, bids should be high, while prices should be equal to or lower than your competition. At the same time, you can avoid spending too much ad money on products that will sell on their own or run out of stock by selling them at a high price to maximize yield and reducing your ad spend.

In addition, by maximizing your ad spend for long-term goals like CLV, margin and new customer acquisition, you can improve the entire business’s revenue and profitability.

Ultimately, profit-centric metrics should be the modern retailer’s success criteria. As a C-level executive, your goal should be to shift your business’s way of thinking so that the e-commerce goals of new customer acquisition, CLV and margin tracking sit alongside paid search and shopping media spend in a way that is transparent, accountable and integrated with your data warehouse.
