—  Your Benefits  —

In-Market Testing

Benefits of Mosaic Testing in Retail

In-market testing is not simply paint-by-numbers. Unique products, customers, promotions, and marketing objectives require a custom approach for every test. But certain techniques consistently help ensure more accurate insights and greater ROI. Both the art and the science of testing – clear, bold test elements, techniques that sharpen insights while subduing market noise, and efficient integration within ongoing marketing and retail programs – work together to maximize ROI.

Most tests include 3-25 marketing-mix elements. With mosaic testing, this means each store, e-mail, or ad includes a unique combination of every test element – some at the "control" or "business as usual" setting and some at the new, "test" setting. Testing many marketing-mix elements at once gives you four clear advantages:

1. Accelerated learning

Combining dozens of tests in one mosaic, you can speed your rate of learning without sacrificing scientific rigor. One mosaic test often requires greater effort than one champion-challenger test, but far less effort than testing the same ideas as a series of one-variable tests over months or years (versus weeks).

2. Small sample size

Ideas are the raw materials of testing – the tiles within the marketing mosaic. Sample size is like the size of the canvas – how large a picture you need to see the marketplace clearly. Variation distances you from the market landscape and clouds your view. The greater the distance, the larger the canvas you need to paint. You can't simply evaporate the fog, so you need enough sample size to see through it (the math is explained here).

The science of mosaic testing lets you see (and quantify) all test elements with the same sample size. Instead of a monochromatic picture, you can paint a more-realistic full color picture of your marketing mix, yet clearly see the impact of each element. The underlying structure of this mosaic – how the elements are mixed and pieced together – is what allows you to separate the impact of each change.

Unknown to your customers (or competitors), the marketing-mix is set up in a well-defined, unique combination within each store (or email, CRM program, etc.). It looks like one picture to your customers, but is created from a particular pattern. After collecting data, this allows us to separate the impact of each element while also assessing combinations of elements.
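As a purely illustrative sketch (not an actual Artestry design), the "well-defined, unique combination" in each store can be built as a fractional factorial design. Here, 7 test elements are assigned across just 8 stores using a standard textbook choice of generators (D=AB, E=AC, F=BC, G=ABC); every store gets a distinct mix of "control" (-1) and "test" (+1) settings, and every element appears at each setting in exactly half the stores:

```python
# Illustrative sketch: a 2^(7-4) fractional factorial "mosaic" assigning
# 7 test elements across 8 stores. Generators D=AB, E=AC, F=BC, G=ABC are
# a standard resolution-III choice; all names and figures are hypothetical.
from itertools import product

def mosaic_design():
    recipes = []
    for a, b, c in product([-1, 1], repeat=3):   # 3 base factors -> 8 runs
        recipes.append({
            "A": a, "B": b, "C": c,
            "D": a * b, "E": a * c, "F": b * c, "G": a * b * c,
        })
    return recipes

design = mosaic_design()
# Each recipe is a unique mix of "control" (-1) and "test" (+1) settings.
for store, recipe in enumerate(design, start=1):
    print(store, recipe)
```

Because each element's column is balanced across the design, the average difference between its +1 and -1 stores isolates that element's effect, which is what lets one sample serve every test element at once.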

3. Accuracy and depth of insights

The multivariable test strategy parallels the complexity of the marketplace. By changing many variables in combination, we can quantify the proven impact of each element alone, plus interactions – where elements may have a different impact depending upon how others are set. This approach lets you quickly optimize the full marketing mix and ensure more valid estimates of the real-world market impact.

In addition, combining many marketing-mix elements within the same mosaic test allows you to reduce experimental error by 60-80%. Less noise better clarifies the picture of the marketplace and allows you to see small effects that would normally be lost in the clouds of uncertainty.

4. Testing return-on-investment

One of the greatest things about testing is the ability to prove your return-on-investment. After calculating the impact of each element, a simple A/B confirmation test of the optimal (with all significant variables combined) vs. the original "control" gives you a concise measure of the increase in response rate, dollar sales, and profitability. Subtract the total cost of testing and you have a simple equation for your ROI on testing. With mosaic testing, your savings in time and sample size, plus accelerating the rollout of the optimal combination, give you rapid payback and a measurable ROI.

 Artestry experts have worked with a range of programs and industry leaders to achieve quick, clear, quantifiable results from in-market testing...

  • In-store Testing
  • CRM and Multichannel Testing
  • Price Testing
  • Product and Marketing-Mix Testing
  • Social Media Testing
  • Media Mix Testing

In-Store Testing

In-store testing covers all areas of shopper marketing and the front-line retail experience:

  • In-store Advertising: window, floor, and shelf advertising; signage, displays, and racks
  • Product, packaging, and positioning: shelf set and merchandising
  • Pricing and promotions: price points, sales and discounts, upsell and cross-sell, offer signage

For each test, a unique combination of elements is executed in each store (or group of stores in each market area). Each store is monitored to ensure the correct and consistent setup throughout the test weeks.

Sales are measured store-by-store, but summarized among a group of stores (usually 1-2 geographic markets) to assess natural variation. Then the difference between "test" and "control" stores for each element (and combinations) is calculated in one or both of the following ways:

  1. Sales in each store during the test period versus predicted sales (statistical modeling of historical sales levels)
  2. Test store sales versus sales in a control group of stores (comparing sales over the same weeks, while eliminating predicted store-to-store differences)
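The two comparison methods above can be sketched with made-up weekly sales figures for one test store (all numbers are illustrative assumptions):

```python
# Hypothetical sketch of the two comparison methods above, using made-up
# weekly dollar-sales figures for one test store over a two-week test.
test_store_sales = [52_000, 54_500]   # actual sales in the test period
predicted_sales  = [50_000, 51_000]   # statistical model of historical sales
control_group    = [49_500, 50_200]   # control-group stores, same weeks

# Method 1: actual sales versus the store's own historical forecast
lift_vs_forecast = sum(test_store_sales) - sum(predicted_sales)

# Method 2: test-store sales versus the control group over the same weeks
lift_vs_control = sum(test_store_sales) - sum(control_group)

print(lift_vs_forecast, lift_vs_control)
```

In practice both lifts would be compared against the natural store-to-store variation before being called significant.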

Metrics usually include: total transactions (the number of buyers), the number of units sold (or total basket size), along with total dollar sales and margin. Statistically significant marketing-mix changes are those that have a greater impact than the natural difference in sales levels among stores or markets. (This Interfaces article explains the approach in greater detail.)

Here are two examples of in-store tests...

Product sales jump 21% after one supermarket test

In-store tactics – displays, advertising, shelf set, promotions and pricing – can drive sales, but at a large cost. One firm sought the optimal in-store marketing mix to increase sales with the most cost-effective tactics. They tested 10 marketing-mix elements at once: 5 shelf set changes, 3 types of advertising, and 2 on-package promotions. The mosaic test included combinations of all 10 elements in each of 50 stores for two weeks.

Even with the minimal sample size for the test, results clearly identified the top 4 elements, which together drove a 21% increase in sales. In addition, two costly shelf-set changes and two advertising tactics had no impact – so the team could shift marketing dollars to where they had a measurable effect. This scientific approach gave them clear insights while saving 6 months of testing.

Fortune-50 retailer increases sales over 4% after one 11-element test of weekly advertising

The Sunday circular remains an important advertising vehicle for national retailers. One Fortune-50 marketing team reviewed the layout and details of the ad, past tests, and competitors' ads to pinpoint 11 elements for one test: 6 creative changes, 3 discount and promotional changes (affecting both the ad and in-store sale items), and 2 market variables. These were combined into a 12-recipe mosaic test, with each recipe tested over four weeks in each of two markets.

Six elements had a significant effect on sales, basket size, or the number of transactions. When the optimal circular/in-store combination was run against the original version, sales jumped 4.1% in the test stores versus the control stores. Standard controlled-store testing would have required 17,000 additional stores—or another year of testing—for the same results.

CRM and Multichannel Testing

Direct mail, e-mail, Internet, and multichannel retention and CRM programs are ideal for advanced mosaic testing strategies. Direct-response metrics increase precision by linking each purchase to specific one-to-one contacts. Therefore, experimental error and sample size can be reduced, while tests can be more granular – focusing on specific elements of the creative, offer, list, and contact stream.

Multichannel and multi-touch tests are a bit more complex and time-intensive (controlling the marketing mix across multiple efforts), but the benefits are magnified as well: increasing scientific validity with small sample sizes and saving years of testing – by testing variables in parallel over months instead of in a series of multi-month A/B tests. Results can be analyzed for each campaign, but the primary objective is increasing sales (and customer lifetime value) across the full series of contacts.

Here are a few examples of single-campaign direct marketing tests and CRM tests covering a series of multichannel efforts:

CRM test of 17 elements across 14 campaigns over 9 months (for a 15% lift)

The CRM director in a $4B company wanted to optimize a 14-touch multichannel program across three customer segments and four predictive models. Starting with 41 ideas, the Artestry team created two tests with 17 marketing-mix elements – changes to the creatives, pricing, offers, and contact strategy.

With the innovations of mosaic testing, Artestry completed the CRM tests in 9 months (versus the 3 years required for test-control techniques) and quantified the effect of each element, along with interactions and curvature in the price effect. With 10 significant elements, the optimal contact strategy beat the 7-year control, increased response as much as 15%, and confirmed differences by customer segment and statistical model.

Multichannel test increases revenue 8.3%

A home goods manufacturer/marketer conducted a 4-month multichannel test of the number, timing, and mix of contacts. The optimal contact strategy showed a proven 7.8% lift in response (worth $ millions per year) while reducing the cost of testing by $400,000. Common test-control techniques would have required 2.5 years to achieve equal confidence.

Retail traffic increases 91% after one CRM test

An ad agency brought in the Artestry team to manage a CRM test to drive traffic through a direct mail and Internet promotion. Seven significant elements of the CRM campaign led to a 91% increase in customer traffic and a nearly 3x increase in credit requests.

Co-branded direct marketing test increases response 16.2%

Visa ran one test across 6 direct mail programs with 4 partners to find the best way to present Visa Signature® benefits. Three of the 12 test elements had a significant impact, increasing response 16.2% while pinpointing the best combination of messages and graphics.

Price Testing

Price testing is clearly important as the price sensitivity of customers and offers from competitors are constantly in flux. (But keep in mind... The obvious benefits of price optimization should not eclipse the benefits of creative and marketing-mix tests.) While many marketers compete on price, customer value encompasses many different variables: pricing strategy, mix of price points within the shelf set, discounts, displays and packaging, premiums, shipping, and the creative presentation of the value proposition and sense of urgency.

The simple concept of price testing belies complex science and human behavior. For example, consider that...

  • Price is a continuous variable, but customers' perceptions are not. You can change price by one cent, from $1.98 to $1.99, $2.00, or $2.01, but the perceived difference between $1.98 and $1.99 can differ from that between $1.99 and $2.00.
  • Price effects can show curvature along with stair-step patterns. Analyzing the impact of price on profitability, we often see curvature in the effect. The lowest price may maximize unit sales, a higher price increases margins, and a price point in-between often achieves the optimal balance of units and dollars.
  • Price does not stand alone. Perceived value is the sum of the price point, product benefits, the full offer, and the creative presentation of the offer. Often one goal of testing is finding ways to present the offer more attractively in order to maintain unit sales while increasing profit margin.
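The curvature described above is why a test often brackets the optimum with three or more price points. As a simple illustration (with made-up profit figures), fitting a parabola exactly through three tested (price, profit) points locates a profit-maximizing price in between:

```python
# Hypothetical sketch: fit a parabola exactly through three tested
# (price, profit) points and return the vertex (profit-maximizing) price.
# All price points and profit figures below are illustrative assumptions.
def vertex_price(prices, profits):
    (x0, x1, x2), (y0, y1, y2) = prices, profits
    # Divided differences give the quadratic coefficients
    d1 = (y1 - y0) / (x1 - x0)
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)          # curvature term
    b = d1 - a * (x0 + x1)             # linear term
    return -b / (2 * a)                # vertex of a*x^2 + b*x + c

# Profit peaks near the middle price, so the optimum sits just above $1.99
best = vertex_price([1.79, 1.99, 2.19], [8_200, 9_100, 8_400])
print(round(best, 4))
```

A real analysis would also check that the fitted curvature opens downward and is larger than the noise in the data before trusting the interpolated optimum.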

Here are a few focused price tests:

1 mosaic test of 6 price points + 6 offer elements

A "hybrid" mosaic test quantified price sensitivity and "curvature" in the price effect to pinpoint the optimal price point, along with 4 other elements that drove response. The optimal mix increased response 4.9% while increasing profit $3.50 per order.

CPG product mix price test: 81 possible combinations analyzed at once

A "central composite design" for a mosaic test of four 3-level elements allowed the Artestry team to quantify main effects, interactions, and curvature by testing less than one-third of all possible combinations. This complex test identified the optimal number and order of items, plus the starting price and price spread among items.
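To make the run-count saving concrete, here is an illustrative sketch of a face-centered central composite design (alpha = 1) for four 3-level factors: 2^4 factorial corners, plus 8 axial points, plus 1 center point gives 25 runs instead of all 3^4 = 81 combinations. (This is a generic textbook construction, not the actual Artestry design.)

```python
# Illustrative sketch of a face-centered central composite design for
# k = 4 three-level factors: 16 factorial corners + 8 axial points +
# 1 center point = 25 runs, versus all 3^4 = 81 combinations.
from itertools import product

def face_centered_ccd(k=4):
    corners = [list(run) for run in product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):
        for level in (-1, 1):
            point = [0] * k     # all other factors at their middle setting
            point[i] = level
            axial.append(point)
    center = [[0] * k]
    return corners + axial + center

runs = face_centered_ccd()
print(len(runs))  # prints 25
```

The corners estimate main effects and interactions, while the axial and center points supply the middle level needed to detect curvature.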

Ultimately, this type of test gives you the freedom to optimize all pricing / merchandising / shelf set elements at once, so you can avoid sub-optimizing one variable in isolation (while, at the same time, reducing sample size by two-thirds).

New insights from a simple 4-element offer test

A leading marketer ran a mosaic test of different prices, odd price points, a shipping fee, and a premium. Results showed that the additional shipping fee is profitable only when the premium is included in the offer. By testing combinations of variables, the marketing team gained new insights beyond what years of A/B testing had provided.

 

Product and Marketing-Mix Testing

Marketing programs are complex. Thousands of variables come together to affect customer perceptions, interest, and purchase behavior. Testing individual offers or product variables in isolation can ignore valuable interactions within the marketing mix. Many tests focus on the in-store marketing-mix at the point of purchase, or advertising and marketing programs to bring people into the stores and website, but another strategy is to take a step back and see how spend should be allocated throughout the full marketing mix.

Tests across multiple marketing channels can be very effective:

$25B retailer increases sales 5.1% after one 6-week test

With the high value of circular and in-store advertising, one global marketer asked the Artestry team to help optimize ad spend and ROI. After brainstorming over 300 ideas, the team focused on 18 marketing-mix elements for one in-market test of 11 circular and 7 in-store elements. The 6-week mosaic test reduced sample size by 90% and pinpointed marketing-mix changes proven to increase sales 5.1%. Significant elements included: circular page count, product mix, promotions and coupons; retail shelf and window signage, and in-store displays. Standard controlled-store tests would have required every store in the chain or a full year of testing to see the same results.


Artestry often combines testing with broader analytics – testing statistical models and segmentation to better integrate the two. This leverages the benefits of both “sides of the coin”: you gain new insights, more granular data, and proven results from testing. These insights can confirm segmentation models, add new data to continually refine the statistical models, and offer clean, detailed data for more effective data mining.

Social Media Testing

The impact of social media is difficult to control and measure, but every marketing program needs some assessment of ROI. Social media programs can be combined into one test, or tested individually to optimize the ad content and strategy. Controlling social media tests can be a challenge: Facebook, for example, allocates traffic inconsistently across ads, so a balanced test requires additional safeguards. Google also has internal controls that can lead to unbalanced ad impressions and limited test runs within Google Analytics Content Experiments.

Here are a few recent tests:

Internet test uncovers three ways to increase Facebook "Shares"

One Internet test focused on increasing retail traffic, but also tested Facebook messages and graphics within the 12-element test. Results showed that a stronger call-to-action and more detailed information up-front increased retail visits but reduced Facebook Shares. A simple sweepstakes graphic increased Shares (versus a more detailed list of prizes), while the text made little difference. The right combination increased Shares by over 25%.

Online advertising test increases Click-through and ROI

One test of Facebook advertising included segmentation variables – like age, interests, and location – along with different combinations of images, headlines, and text. Results showed that a more active image and detailed (though less interesting) copy increased click-through and ROI (while reducing clicks). Segmentation results also helped confirm and refine statistical models which were then re-tested in a second wave.

Paid Search Advertising test

Testing search engine marketing (SEM) via Google paid search ads can include pricing, placement, and creative tests. One simple creative test divided the four lines of the search ad text into four different test elements (headline, line 1, line 2, and URL). Using a mosaic test design, the marketer tested 8 ad versions as part of the same multivariable test, reduced run length by 75%, and found an interaction that would have been difficult to quantify with common split-run techniques.

 

Media Mix Testing

Market testing can focus on specific channels or marketing campaigns, or take a step back to assess the full media mix. In-store, product, CRM, and advertising programs can be combined into one test. Media or marketing-mix tests can also assess shopper, geographic, or competitive segments like:

  • Market-specific differences: small vs. large, urban vs. rural
  • Local store-specific differences: store maturity, sales volume, and/or level of competition
  • Media markets/outlets: mix of TV stations and media markets (schedule, frequency, reach, GRPs)
  • Customer-specific differences: loyalty programs, shopper marketing by segment, CRM tests

Media mix tests are similar to contact strategy (multichannel) tests in taking a broader view across programs, rather than testing numerous changes within one program.

Retailer cuts advertising costs more than 11% by optimizing the Media Mix

A retailer combined all national advertising programs into one test: frequency, reach, schedule, and cost of TV, radio, print, and direct mail. Testing different media mixes for three months in a dozen markets across the country, the company pinpointed the right advertising spend to increase sales while reducing the total advertising budget. The mosaic test saved two years of testing while proving the impact of each channel and how multiple tactics could work together to drive retail sales.

 

Learn ways you can Get Started with Artestry or Learn More about the science behind mosaic in-market testing.