
CUSTOM PRODUCT TESTING TECHNIQUES

Monadic
Monadic testing typically is the best method. Testing a product alone offers many advantages. Interaction between products (which occurs in paired-comparison tests) is eliminated. The monadic test simulates real life (that's the way we usually use products, one at a time). By focusing the respondent's attention upon one product, the monadic test provides the most accurate and actionable diagnostic information. Additionally, the monadic design permits the use of normative data and the development of norms and action standards. Virtually all products can be tested monadically, whereas many products cannot be accurately tested in paired-comparison designs. For example, a product with a very strong flavor (hot peppers, alcohol, etc.) may deaden or inhibit the taste buds so that the respondent cannot really taste the second product.

Sequential Monadic
Sequential monadic designs are often used to reduce costs. In this design, each respondent evaluates two products (he or she uses one product and evaluates it, then uses the second product and evaluates it). The sequential monadic design works reasonably well in most instances, and offers some of the same advantages as pure monadic testing. One must be aware of what we call the "suppression effect" in sequential monadic testing, however. All the test scores will be lower in a sequential monadic design, compared to a pure monadic test. Therefore, the results from sequential monadic tests cannot be compared to results from monadic tests. Also, as in paired-comparison testing, an "interaction effect" is at work in sequential monadic designs. If one of the two products is exceptionally good, then the other product's test scores are disproportionately lower, and vice versa.

Protomonadic
The protomonadic design (the definition of this term varies greatly from researcher to researcher) begins as a monadic test, followed by a paired comparison. Often, sequential monadic tests are also followed by a paired-comparison test. The protomonadic design yields good diagnostic data, and the paired comparison at the end can be thought of as a safety net: added insurance that the results are correct. The protomonadic design is typically used in central-location taste testing, not in-home (because of the complexity of execution in-home).

Paired-Comparison
Paired-comparison designs (in which the consumer is asked to use two products and determine which product is better) appeal to our common sense. The paired comparison is a wonderful design if presenting evidence to a jury, because of its "face value" or "face validity." It can be a very sensitive testing technique (i.e., it can measure very small differences between two products). Also, the paired-comparison test is often less expensive than other methods, because sample sizes can be smaller in some instances. Paired-comparison testing, however, is limited in value for a serious, ongoing product testing program. The paired-comparison test does not tell us when both products are bad and does not lend itself to the use of normative data. It is heavily influenced by the "interaction effect" (i.e., any variations in the control product will create corresponding variance in the test product's scores).

Repeated Pairs
A repeated paired-comparison taste test is exactly what the name suggests. Each respondent participates in a paired-comparison taste test (e.g., product J versus product H), followed by a second paired-comparison test (product J versus product H). However, in the second test, the products are presented as two different products (i.e., not labeled as products J and H). The purpose of the repeated paired-comparison taste test is to identify nondiscriminators, the people who don't choose the same product in both tests. That is, it is assumed that someone who chooses product J in the first paired-comparison test and chooses product H in the second paired-comparison test cannot taste (or detect) any difference between the two products. Typically, these nondiscriminators' answers would not be counted. The final results would be based only on respondents who could discriminate between the two products (i.e., based only on those who chose the same product both times).

Triangle Test
The triangle taste test is used primarily for "difference testing." Each participant is presented with three products and asked to taste all three and choose the one that is different from the other two. The triangle taste test is used to determine who can discriminate (i.e., consistently identify the one product that's different), and who cannot (see the sketch below for how such discrimination results might be checked for significance). These discriminators are in turn used as members of small expert panels (sometimes called sensory panels) to assist research and development in formulating and reformulating products, using the triangle design to determine if a particular ingredient change, or a change in processing, creates a detectable difference in the final product. Triangle taste testing is also used in quality control to determine if a particular production run (or production from different factories) meets the quality-control standard (i.e., is not different from the product standard in a triangle taste test using discriminators).

Sensory Research
The term "sensory research" tends to be used by research and development scientists and food scientists in much the same way that the marketing world uses the term "product testing." Many of the methods are identical or very similar. In general usage, the term "sensory research" tends to refer to small-scale product testing that is used by research and development scientists to help them in formulating new foods and beverages, and in reformulating existing food and beverage products.
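The repeated-pairs and triangle designs both rest on a simple probability argument: a pure guesser picks the same product twice with probability one half, and picks the odd sample in a triangle test with probability one third. The short Python sketch below shows how observed results might be checked against that chance level; the panel counts and the use of scipy are illustrative assumptions, not part of the original text.

# Minimal sketch (hypothetical counts): is the panel really discriminating,
# or just guessing? Under the null hypothesis of "no detectable difference,"
# each taster picks the odd sample in a triangle test with probability 1/3.
from scipy.stats import binomtest

n_tasters = 60   # panelists who completed the triangle test
n_correct = 29   # how many correctly identified the odd sample

result = binomtest(n_correct, n_tasters, p=1/3, alternative="greater")
print(f"Observed proportion correct: {n_correct / n_tasters:.2f}")
print(f"p-value vs. 1/3 chance level: {result.pvalue:.4f}")

# For a repeated-pairs test the same logic applies with chance level p = 1/2
# (the probability of choosing the same product twice by guessing alone).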

Often sensory research is conducted with small panels of consumers, or small groups of employees, who have demonstrated an above-average ability to taste, or to detect, small differences in the flavor profile of a food or beverage.

Ingredient Screening
As a preliminary step in attempting to optimize a particular food or beverage formulation, it is valuable to develop an understanding of the relative importance and role of the different ingredients in the formulation. Typically, a number of product formulations are created, each with a high level and a low level (or absence) of a particular ingredient, with all other ingredients held constant. Each respondent usually rates three to five of these different products, depending upon the type of product. The products are rated on overall appeal as well as specific attributes (sweetness, texture, mouth feel, etc.). Who tastes which product is determined by a complex experimental design plan. The resulting data are analyzed via ANOVA and MANOVA statistical techniques, as well as regression and discriminant analyses.

Product Optimization
Product optimization refers to the process of improving a product until it reaches a maximum level of consumer satisfaction or acceptability. A variety of research methods can be used to achieve an optimal product, but the term "product optimization" most typically refers to a structured process in which various ingredients are systematically varied to create a number of different products. These products are then rated by a sample of category users, with each respondent rating three to five different formulations on overall appeal as well as rating specific qualities (moistness, saltiness, color, etc.) of the products. The resulting data are then analyzed by ANOVA and MANOVA, regression and discriminant analyses, and (depending upon the design) by response surface analyses. The output of the analyses is a prediction of the product formulation that would be optimal.
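To make the ingredient-screening analysis concrete, here is a minimal Python sketch of the kind of ANOVA the text mentions, using pandas and statsmodels; the ingredient names, the 1-9 liking scale, and the data are hypothetical, and a real study would follow the full experimental design plan with far larger samples.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical ratings: each row is one respondent's overall-liking score (1-9)
# for a formulation defined by high/low levels of two ingredients.
data = pd.DataFrame({
    "sweetener": ["high", "high", "low", "low", "high", "low", "high", "low"],
    "salt":      ["high", "low", "high", "low", "low", "high", "high", "low"],
    "liking":    [7, 6, 5, 4, 6, 5, 8, 3],
})

# Fit a linear model with main effects and the interaction, then run the ANOVA.
model = smf.ols("liking ~ C(sweetener) * C(salt)", data=data).fit()
print(anova_lm(model, typ=2))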

Product Testing
By Jerry W. Thomas (Decision Analyst)

Based upon 30 years of marketing research experience, spanning thousands of research projects, I am convinced that product testing is the single most valuable marketing research that most companies ever do. The great value of product testing is, perhaps, best illustrated by some of its many uses. It can be used to:

- Achieve product superiority over competitive products.
- Continuously improve product performance and customer satisfaction (i.e., to maintain product superiority, especially as consumer tastes evolve over time).
- Monitor the potential threat levels posed by competitive products and understand competitive strengths and weaknesses.
- Cost-reduce product formulations and/or processing methods, while maintaining product superiority.
- Measure the effects of aging upon product quality (shelf-life studies).
- Implicitly measure the effects of price, brand name, or packaging upon perceived product performance/quality.
- Provide guidance to research and development in creating new products or upgrading existing products.
- Monitor product quality from different factories, through different channels of distribution, and from year to year.
- Predict consumer acceptance of new products.

Companies committed to rigorous product testing and continuous product improvement can, in most instances, achieve product superiority over their competitors. Product superiority, in turn, helps strengthen brand share, magnifies the positive effects of all marketing activities (advertising, promotion, selling, etc.), and often allows the superior product to command a premium price relative to competitors. Most companies, unfortunately, do very little product testing. Few companies really understand the power of continuous product improvement and product testing. Even fewer companies know how to do product testing the right way. Fewer yet budget enough money to support a serious product-testing program. These shortcomings in the majority of companies create opportunities for the minority of companies who are dedicated to continuous product improvement. How can companies realize optimal value from product testing?

Product Testing Secrets
The secrets to truly accurate and actionable product testing are several:

1. A systems approach. The methods and procedures of product testing should constitute a standardized system, so that every like product is tested exactly the same way, including identical product preparation, age, packaging, and coding; identical questionnaires (of course, parts of the questionnaire must be adapted to different product categories); identical sampling plans, typically employing blocking-screening grids to ensure matched samples; identical data preparation and tabulation methods; and similar analytical methods.

2. Normative data. As products are tested over time, the goal is to build normative databases, so that successive product tests become more meaningful and valuable. The normative data, or norms, continually improve a company's ability to correctly interpret product-testing scores, and the norms help reveal exactly how good, or how bad, the test product is.
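As an illustration of how a normative database might be used to read a new test score, here is a small Python sketch; the scores are hypothetical, and real norms would be segmented by category, test design, and sample composition.

import numpy as np

# Hypothetical norms: mean overall-liking scores from previous monadic tests
# in the same category (assumed to come from the normative database).
norm_scores = np.array([6.2, 6.5, 5.9, 6.8, 6.1, 6.4, 6.0, 6.6, 6.3, 5.8])

test_product_score = 6.9  # mean liking of the current test product

# Express the new result relative to the norm distribution.
z = (test_product_score - norm_scores.mean()) / norm_scores.std(ddof=1)
percentile = (norm_scores < test_product_score).mean() * 100
print(f"z-score vs. norms: {z:.2f}, beats {percentile:.0f}% of past tests")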

3. Same research company. Use one research company for all of your product
testing. This is the only way you can make sure all tests are conducted in exactly the same way.

4. Real environment test. If the product is used in offices, it should be tested in offices by people who work in offices. If the product is typically used at home, it should be tested at home. If the product is consumed in restaurants, it should be tested in restaurants, and so on. In general, this kind of real environment test will produce the most accurate results. For example, for food products, an in-home usage test is almost always more accurate and predictive than a central-location taste test.

5. Relevant universe. Sampling is a critical variable in product testing. For new products or low-share products, the sample should reflect, or represent, the brand share makeup of the market. For well-established, high-share or highly differentiated products, the sample should contain a readable subsample of that product's users, and a readable cell of nonusers. If the product category is underdeveloped (e.g., a relatively new category), then the sample should include nonusers of the category, as well as users. Also, it's always important to represent medium to heavy users of the product category in the final sample. In summary, if a company's brand share is very low, it's important to assign more weight (or importance) to the opinions from nonusers of the brand. If brand share is very high, then what brand users think is more important.
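One simple way to implement that weighting idea is sketched below in Python; the cell means, sample sizes, and 10% brand-share figure are purely hypothetical, and in practice the weighting scheme would be set by the sampling plan.

# Hypothetical mean purchase-intent scores (0-10 scale) from two readable cells.
mean_users, n_users = 7.4, 150        # brand users in the sample
mean_nonusers, n_nonusers = 5.9, 250  # nonusers of the brand in the sample

# Unweighted: each respondent counts equally, so the sample mix drives the result.
sample_mean = (mean_users * n_users + mean_nonusers * n_nonusers) / (n_users + n_nonusers)

# Weighted: cells are weighted to an assumed 10% brand share in the market,
# giving nonusers the greater influence recommended for low-share brands.
brand_share = 0.10
weighted_mean = brand_share * mean_users + (1 - brand_share) * mean_nonusers

print(f"sample mean: {sample_mean:.2f}, share-weighted mean: {weighted_mean:.2f}")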

6. Critical variables. Product performance and quality must be defined from the consumer's perspective, not the manufacturer's. What aspects of the product are truly important to consumers? What critical variables determine the consumer's satisfaction with the product? These critical variables must be identified for each product category (typically, with focus groups or depth interviews) to design an accurate product-testing system.

7. Conservative actions. The formulation of an established product should never be changed without careful testing and evaluation of the new formulation. Once you are sure you have a better product, introduce it into a limited geographic area for a reasonable time period (several product repeat purchase cycles).

Then, and only then, roll the new product out to all markets. The smaller the market share, the greater the risks that can be taken with a new formulation. The larger the market share, the more conservative one should be in introducing a new formulation.

The Major Techniques
The monadic, sequential monadic, paired-comparison, and protomonadic are the most widely used research designs for product testing.

1. Monadic testing typically is the best method. Testing a product on its own offers many advantages. Interaction between products (which occurs in paired-comparison tests) is eliminated. The monadic test simulates real life (that's the way we usually use products, one at a time). By focusing the respondent's attention upon one product, the monadic test provides the most accurate and actionable diagnostic information. Additionally, the monadic design permits the use of normative data and the development of norms and action standards. Virtually all products can be tested monadically, whereas many cannot be accurately tested in paired-comparison designs. For example, a product with a very strong flavor (hot peppers, alcohol, etc.) may deaden or inhibit the taste buds so that the respondent cannot really taste the second product.
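In a monadic design, each product is rated by its own matched cell of respondents and the cell means are then compared. A minimal Python sketch of that comparison is shown below using Welch's two-sample t-test; the ratings are made up, and real tests would use far larger matched samples together with the norms discussed above.

# Minimal sketch (illustrative data): comparing two monadic cells' mean ratings.
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical overall-liking ratings (1-9 scale), one matched cell per product.
cell_a = np.array([7, 8, 6, 7, 9, 6, 8, 7, 7, 8, 6, 9])
cell_b = np.array([6, 7, 5, 6, 7, 6, 5, 7, 6, 6, 8, 5])

t_stat, p_value = ttest_ind(cell_a, cell_b, equal_var=False)  # Welch's t-test
print(f"Product A mean: {cell_a.mean():.2f}, Product B mean: {cell_b.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")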

2. Sequential monadic designs are often used to reduce costs. In this design,
each respondent evaluates two products (he or she uses one product and evaluates it, then uses the second product and evaluates it). The sequential monadic design works reasonably well in most instances, and offers some of the same advantages as pure monadic testing. One must be aware of what we call the "suppression effect" in sequential monadic testing, however. All the test scores will be lower in a sequential monadic design, compared to a pure monadic test. Therefore, the results from sequential monadic tests cannot be compared to results from monadic tests. Also, as in paired-comparison testing, an "interaction effect" is at work in sequential monadic designs. If one of the two products is exceptionally good, then the other product's test scores are disproportionately lower, and vice versa.

3. Paired-comparison designs (in which the consumer is asked to use two products and determine which product is better) appeal to our common sense. It's a wonderful design if presenting evidence to a jury, because of its "face value" or "face validity." The paired comparison can be a very sensitive testing technique (i.e., it can measure very small differences between two products). Also, the paired-comparison test is often less expensive than other methods, because sample sizes can be smaller in some instances. Paired-comparison testing, however, is limited in value for a serious, ongoing product-testing program. The paired-comparison test does not tell us when both products are bad. The paired-comparison test does not lend itself to the use of normative data. The paired-comparison test is heavily influenced by the "interaction effect" (i.e., any variations in the control product will create corresponding variance in the test product's scores).
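To illustrate why paired comparisons can be sensitive with modest samples, the sketch below tests a hypothetical preference split against a 50/50 null; the counts are invented, and a real analysis would also handle "no preference" answers and the interaction effect just described.

# Minimal sketch (hypothetical counts): is a 58% preference for Product J real,
# or consistent with a 50/50 coin flip?
from scipy.stats import binomtest

prefer_j, total = 116, 200  # respondents expressing a preference
result = binomtest(prefer_j, total, p=0.5, alternative="two-sided")
print(f"Preference for J: {prefer_j / total:.0%}, p-value vs. 50/50: {result.pvalue:.3f}")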

4. The protomonadic design (and the definition of this term varies greatly from researcher to researcher) begins as a monadic test, followed by a paired comparison. Often, sequential monadic tests are also followed by a paired-comparison test. The protomonadic design yields good diagnostic data, and the paired comparison at the end can be thought of as a safety net: added insurance that the results are correct. The protomonadic design is typically used in central-location taste testing, not in-home testing (because of the complexity of execution in the home).

Nonpackaged Goods Categories
While most product testing is conducted in the food and beverage industries, the concepts and methods of product testing are applicable to virtually all product categories, although the structure and mechanics of execution will vary greatly from product category to product category. For example, computer software can be tested, furniture can be tested, store environments can be tested, dog food can be tested, airline service can be tested, equipment prototypes can be tested, etc.

Competitive Advantage
The ultimate benefit of product testing is competitive advantage. Product superiority is the surest way to dominate a product category or an industry. Companies dedicated to ongoing product improvement and product testing can achieve product superiority, and achieve a competitive advantage of great strategic significance. Companies that ignore product improvement and product testing, on the other hand, may wake up one morning to find themselves on the brink of extinction from a competitor who has built a better mousetrap.

IPSOS
Product testing plays an important role in both the innovation process for new products and the brand management of existing products. A great new idea or brand repositioning is doomed to failure unless the product delivers on the concept. Product testing can provide the answers to important marketing questions, including:

- Which product formulation is the best among my alternatives?
- How well does my best formula stack up against competition?
- Does my product fit with my concept?
- How much sales revenue will this product generate?
- Will a lower-cost formula perform as well as the product currently on the market?
- Will a product improvement attract new customers without alienating current customers?
- How can my product formulation be optimized?

Answers to these critical questions will steer the course of product development and determine the success or failure of the product in the marketplace. The best approach to obtain maximum learning from your product testing is to design a product testing system. Three benefits of a system are: (1) consistency of approach in terms of sample size, confidence range, methodology options, questionnaire design, analysis, and report format; (2) the ability to generate a normative database on key measures repeated from study to study; and (3) the opportunity to conduct meta-analysis across product tests to understand the product elements that drive ratings or preference.

The design of a product test must be handled with care. The methodology employed depends on the objectives of the study (i.e., the marketing questions listed above). A key consideration is whether the test will be monadic, paired comparison, proto-monadic, sequential monadic, or multi-product (three or more products). Another important factor is whether respondents will be exposed to blinded test product or identified test product. After decades of conducting product tests for Fortune 500 consumer packaged goods companies, the Consumer Products Division of Ipsos Insight has established guidelines to ensure that product tests are designed correctly.

Monadic Versus Paired Product Tests
There are two general types of product tests: monadic and paired, which includes paired comparison, proto-monadic, and sequential monadic tests. Another type of product test is multi-product (three or more products). These tests are very useful when the goal is to optimize a product by determining the product elements that drive performance. Multi-product tests require product formulations based on an experimental design of product elements. As such, the design of multi-product tests is quite different from monadic or paired tests. Our subsequent discussion focuses only on monadic and paired tests, which are described below.

Monadic Test
Each respondent tries one product and answers one survey relevant to that product.

Paired Comparison
Each respondent tries two products and then answers one survey, which includes questions asking the respondent to compare the two products.

Proto-Monadic
Each respondent tries two products, and answers two surveys. Specifically, each respondent tests the first product and answers a monadic survey and then tests the second product and answers a comparative survey. The order of products tested is rotated among the respondents. By comparing monadic results between cells, you will learn whether the differences between the products are meaningful when tested separately; by pooling comparative results across cells, you will learn which product is preferred.

Sequential Monadic
Each respondent tries two products and answers two surveys, which are consecutive monadic surveys. Specifically, each respondent tests one product and then answers a monadic survey. Next, each respondent tests a second product and then answers a second monadic survey. Preference questions are asked at the end of the second survey. The first product is evaluated without knowledge that a second product will follow. Sequential monadic tests are used to obtain both monadic and comparative ratings. However, once the first product is tested, the ratings of the second product are no longer monadic and can be difficult to interpret. For this reason, we recommend proto-monadic tests over sequential monadic tests whenever possible.

The monadic and paired methods differ from one another largely based on two issues: validity and sensitivity. Monadic testing offers greater validity, as consumers use only one product at a time, as they would in the real world. Paired testing offers greater sensitivity, as consumers are exposed to two separate stimuli, and using products one after the other magnifies differences.

So, how does one decide which design to apply? Generally, monadic product tests should be used when there are readily discernible product differences. This scenario typically occurs during product development when the issue is whether consumers like the product or hate it. Monadic tests are also very suitable for testing innovative products (for which no benchmark or competition exists) and for testing products having long purchase cycles (which require a long usage period, making it hard to compare products). When the variations between alternative formulas are minor, monadic tests are unlikely to pick up differences without large base sizes and, consequently, high costs. In these cases, paired comparison, proto-monadic, or sequential monadic tests are recommended because they provide better discrimination. This scenario occurs when testing product reformulations, in which there would be only subtle differences among the test alternatives or versus competition. Paired tests are more feasible when the products tested have a short usage cycle, or when they would normally be used almost simultaneously. Many clients prefer proto-monadic/sequential monadic designs, in order to provide both sensitivity and validity in the same test. Although these paired tests are usually less expensive than monadic tests, they usually take longer to execute.

Blinded Versus Identified Product Tests
Respondents in a product test are exposed to either blinded or identified test product. In blind tests, the test product is disguised so that the tester (often a brand user) cannot readily identify it. The choice of blind versus identified depends on the stage of product development and how the results will be used.

When to Use Blinded Product Tests
Test product should be blinded when the purpose is to compare product formulations. When blinded product is used, no brand expectations exist; hence, the performance qualities of each formulation being tested are magnified. Blind testing should be conducted during early stages of product development or during restages. Product tests should be designed to use blinded test product when the manufacturer has the following objectives:

- What formulation leads to optimal perceptions?
- How do the product components interact? Is there synergy?
- Which attributes drive overall product performance/consumer liking?
- How does my test product compare to my current product based on formulation alone?
- How does my test product compare to my competitors' products based on formulation alone?
- If a more expensive formulation is used, will consumers notice a difference?
- If a less expensive formulation is used, will consumers notice a difference?

Before respondents are exposed to blinded product, the product should be labeled with neutral letters and/or numbers. It is important not to use labels like A or B, or 10 or 100, as there is an implied ranking to these. Labels like J38 and K23 should be used (a short labeling sketch follows this passage). When analyzing the results of a blind product test, it is important to recognize that differences between products are typically overstated. This overstatement is acceptable since the goal is usually to maximize the opportunity to see differences. However, if differences are not observed in a blind test, then it is not likely that differences will be observed during an identified test or in the real world. Even if differences are observed in a blind test, they still may not be recognized in the real world. For that reason, we recommend following up a blind test with an identified test.

When to Use Identified Product Tests
When respondents are exposed to an identified product, they bring to the test the same expectations they would have when using the brand in the real world. Thus, identified tests reflect real-world evaluation of product performance. A product test should be designed as an identified test when the goal is to measure product performance taking into account the brand name. For example, when manufacturers implement a formula change, they often simultaneously reposition the brand. A concept/product test is then appropriate for determining whether the reformulated product delivers based on the new positioning. Identified product is used in this type of test, which is typically done monadically.

In keeping with the attempt to reflect real-world conditions, identified testing should be employed in the natural settings of respondents' own homes, with a usage period long enough to allow repeated opportunities to use the product. For new-to-the-world consumer packaged goods, if repeat usage is uncertain, the usage period may be extended for a long period. For example, some products may be used several times before they are discarded, wear out, or are used up.
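As a small aside on the neutral-labeling guidance above, here is a hedged Python sketch of one way such codes could be generated; the letter-plus-digits format and the excluded characters are illustrative choices, not a prescribed Ipsos procedure.

import random

# Illustrative generator for neutral blind-test labels such as "J38" or "K23".
# Letters that could suggest a ranking (e.g., A, B) are excluded, and numbers
# are drawn at random so no implied order is communicated.
LETTERS = "CDFGHJKLMNPQRSTVWXZ"

def neutral_labels(n_products: int, seed: int = 7) -> list[str]:
    rng = random.Random(seed)
    labels: set[str] = set()
    while len(labels) < n_products:
        labels.add(rng.choice(LETTERS) + f"{rng.randint(10, 99)}")
    return sorted(labels)

print(neutral_labels(2))  # e.g., two blinded formulations in a paired test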

It is also important to monitor use over a long enough time to evaluate when performance declines. Other times it may be of interest to see exactly how respondents use the product, specifically, how often, for what occasions, and how much is used on each occasion. In-home testing allows for these scenarios.

When analyzing the attribute ratings of an identified product test, it is important to recognize the halo effect of recognizable brands. A popular brand will score higher than a less popular or less-known brand among a general population sample. These differences can be reduced by analyzing results within brand user groups. It is especially important to evaluate identified product tests among brand users and competitive users to see if new formulations appeal to new users or alienate current users. When you have these halo effects, it is more difficult to disentangle which attributes are driving product performance. In these cases, we recommend applying more advanced statistical techniques, such as ridge regression or Shapley Value analysis. Still, identified tests provide an evaluation of the total product offering.

Product tests must be designed with care. It is critical to correctly identify the business objectives of a study so that a test can be designed which correctly focuses on validity versus sensitivity, or a combination of both. It is also important to determine whether the brand should influence the results, and to analyze the data carefully and deeply when the brand does have an influence. In general, blind product tests are used to understand product performance, and, consequently, they are often used in paired tests (e.g., tests comparing the test product to the current product or competition). Identified product tests are used more for marketing purposes and are typically used in monadic tests (e.g., concept/product tests to understand the concept/product fit).
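To illustrate the driver-analysis idea mentioned above (disentangling attribute effects when brand halo inflates ratings), here is a minimal sketch using ridge regression with scikit-learn in Python; the attribute names, the simulated data, and the alpha value are all hypothetical, and Shapley Value analysis would be a separate technique.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

# Hypothetical attribute ratings (rows = respondents) and overall-liking scores.
rng = np.random.default_rng(0)
attributes = rng.normal(6, 1.5, size=(200, 4))        # taste, texture, aroma, value
overall = (
    0.6 * attributes[:, 0] + 0.3 * attributes[:, 1]
    + 0.1 * attributes[:, 2] + rng.normal(0, 1, 200)
)

# Standardize so the ridge coefficients are comparable across attributes.
X = StandardScaler().fit_transform(attributes)
model = Ridge(alpha=1.0).fit(X, overall)

for name, coef in zip(["taste", "texture", "aroma", "value"], model.coef_):
    print(f"{name:>8}: relative importance {coef:.2f}")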

Custom Marketing Research Burke

Product Research, Analysis, And Testing

Introduction
Product research is a general term that encompasses a wide array of custom designs and applications, with the common thread being the evaluation of a product (or service) by the end-user or target customer. Following is an outline of Burke's view of product research, including some of the more common methodologies used by Burke in the product testing arena, presented in order to understand their respective strengths and limitations. Burke has great depth of product research and testing experience, spanning a wide range of both consumer packaged goods and non-packaged goods, such as PC software, new types of financial accounts offered by banks or brokerage firms, or a new phone system for a business. The discussion of test designs that follows will use a consumer food product as an example (e.g., ice cream sandwiches) but would still apply to other consumer or business-to-business services and durables, in most cases, with just slight modifications.

Prototype Testing vs. Reformulation Testing
It is initially important to realize that the ultimate decision on which research design to use will, of course, depend upon the specific objectives of that particular test. No single design can unequivocally be considered "the best". In most cases, the objectives of the test will be contingent upon the development or life-cycle stage of the product being tested. In the early stages of product development, only a prototype may exist, and the objectives typically focus on the optimization of features and characteristics that would maximize customer appeal. In addition, product research can be used to identify the positioning strategy that best translates the product features into salient consumer benefits.

Let's assume our hypothetical ice cream sandwich marketer has determined via concept screening that consumers have an interest in an ice cream sandwich made with mint chocolate chip ice cream, instead of vanilla. An initial product test should determine if the taste meets the expectations. In essence, the goal is to confirm that what sounds like a good idea can be properly executed. Data from a product test would also allow fine-tuning of the product attributes to ensure the brand had the right texture, creaminess, sweetness, strength of flavor, etc.

When a final or finished product exists, but still prior to introduction, a test of the product versus the competition is frequently beneficial. This serves to identify competitive strengths and weaknesses, as well as confirm that the brand positioning is on target. Identification of the attributes that drive preference of your brand over the competition can then be leveraged in advertising. Once on the market, product testing of established brands is usually conducted with one of two purposes in mind. First, as a quality control measure, with the goal being to maintain the standards of the delivered features over the life of the brand. Second, if potential improvements can be made to the product, a reformulation is tested.

The area of product reformulation testing is quite common and can also be divided into two general areas. First, the product may be reformed in order to capture additional market share, as evidenced by promotion of "new and improved" features. Here, the objective is to determine if the reform is truly superior to the original. The second area is cost reduction reformulations. Typically, the product manufacturing process is changed, perhaps through improved technology at the factory, or the substitution of a less expensive raw material or ingredient. Assume the brand manager of our hypothetical ice cream sandwich has found a lower cost supplier of chocolate chips. The end result is a product which is technically "different" from the original, but hopefully exactly the same from the customer standpoint. While switching suppliers would result in significant savings to the bottom line, it is desirable to ensure the higher profit margin is not offset by a loss of share due to product dissatisfaction. Indeed, reformulations of this type are introduced with no promotion, and the objective of a product test is to determine if consumers can discriminate between the original and the reform. Testing among the franchise of your brand users is necessary to meet this objective.

Research Designs For Product Testing
While the above situations call for product testing to answer a marketing issue, there is still the design of the product test itself to be decided. There are two basic product test designs that are commonly used in research, the monadic and the paired comparison.

Monadic Product Tests. In monadic testing, a respondent tests a single product and provides an evaluation of that product. Data collected typically includes variables such as purchase interest and ratings on attributes. If there is more than one product to be tested, matched groups of respondents would test each product, with the data collected from each group being compared to each other.

Paired Comparison Product Tests. In a paired comparison test, respondents use two products in sequence, with no questioning in between. After both products have been used, they are asked to rate each and state a preference. Because questions are not asked until both products have been tried, the
