Webinar

Reporting for conjoint

The final webinar in our series on DIY Conjoint teaches you how to extract insight and report on your Conjoint data.

In this webinar you will learn

Here's a summary of the subjects we cover in this webinar.

You will learn how to:

  • Build a simulator with the click of a button
  • Calculate and chart utilities, importance scores, and willingness-to-pay
  • Compute demand curves and indifference curves
  • Automatically optimize products to maximize market share or profit
  • Calibrate your model to the real world

This is the fourth in our series on conjoint, so please check out our earlier introduction, experimental design, and statistical analysis webinars.

Transcript

I am going to discuss how to figure out what the results of a conjoint study mean. This is the fourth and final webinar in our conjoint series. The previous webinars dealt with an overview, experimental design, and statistical analysis.

Overview

My focus today is on the general principles, but I will be showing things in Displayr. Everything except for the simulators works the same way in Q; you cannot create simulators in Q.

When we report a conjoint study we typically are trying to do two things:

  • Understand preferences
  • Predict behavior

Today, we will start by looking at average utilities, importance, correlation, segmentation, substitution maps, and indifference curves. Then, we will look at predicting, covering simulators, optimizers, demand curves and predicting sales and market share.

 

Hierarchical Bayes (no questions left out for cross-validation)

Last webinar we looked at how to create the best statistical model describing the data. In this study, we asked people six choice questions; I used five to estimate the model and one for cross-validation, to see how accurate the model was. Now I am using all the data, so that we use all the available information when calculating utilities.

Outputs like this one are great for understanding the model, but too much for most clients.

Most clients like to look at utilities plots. There are lots of choices about how to format these.

Let's look at this as a column chart.

In our model I've estimated an attribute called Alternative. This attribute shows the column in which the alternative appeared in the questionnaire.

Why would I do that? In my experience, people often tend to favor the alternatives on the left. By including this attribute in the model, I can remove this effect from the data. But I don't want to show it to my clients.

If you have a good memory, you will remember that the utilities we were looking at in the previous table were a bit different from these. They looked like this.

As you will hopefully remember, the first level of each attribute has been given a utility of 0. This is an arbitrary rule, and it can confuse clients.

A solution is to change the data so that the utility of the least appealing level is 0. We can also make it a bit easier to read by sorting the levels within attributes from lowest to highest. For example, it would be nicer if Brand went from Lindt, to Godiva, to Dove, to Hershey. The difference between the utilities of the lowest and highest levels of an attribute is sometimes called importance. In this example, Sugar is the most important attribute. We can make the chart better by sorting by importance.

The utility shown here is measured on what is called a logit scale. Some people find this a bit weird and like to rescale the data to be out of 100. And some people like to change it so that, rather than the minimum value being 0, the average utility is 0 and the average importance is 100.

The key thing to appreciate about all of these scalings is that they say the same thing.
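To see this concretely, here is a minimal R sketch of these rescalings; the utility values are hypothetical:

# Hypothetical average utilities for Brand, on the logit scale
utilities = c(Lindt = -0.6, Godiva = -0.2, Dove = 0.3, Hershey = 0.9)

# Rescale so the least appealing level is 0
shifted = utilities - min(utilities)

# Rescale so the utilities run from 0 to 100
out.of.100 = 100 * shifted / max(shifted)

# However we scale, the ordering and the relativities are identical
rbind(utilities, shifted, out.of.100)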

 

The scaling of utilities is about presentation

To make this point a bit easier to see, here I've just shown the utilities for brand. The various options never change the relativities. No matter how we show it, Lindt is least appealing, Godiva a bit more appealing, Dove is more appealing, and Hershey the clear winner. So, what do the results tell us?

On average, the more sugar, the happier people are. People prefer to pay less, but there's only a marginal benefit in having the price at $1.49 versus $1.99. So the smart play is probably $1.99. It looks like $2 is a threshold that is expensive to cross. Milk chocolate is the favorite. Hershey is the most appealing brand. People want chocolate made in the USA with almonds. If no almonds, then no nuts. They really don't want hazelnut. Fair trade chocolate would be nice, but it's less important than everything else.

 

Willingness-to-pay/Dollar-metric scaling

One way of making the scaling less subjective is to report in terms of willingness to pay. This is also known as dollar-metric scaling. We have researched a price range of $2.49 − $0.99 = $1.50. From this chart, we can see that a utility of 1.41 equates to a saving of $1.50.

So, it follows that 1 utility = $1.50 ÷ 1.41 ≈ $1.06. We can thus rescale all the utilities in terms of dollars.
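In code, the conversion is just a multiplication. A minimal sketch, where the brand utilities are hypothetical numbers for illustration:

# The $1.50 price range spans 1.41 utility points
dollars.per.utility = 1.50 / 1.41  # about $1.06 per utility point

# Hypothetical brand utilities (relative to Lindt at 0), converted to dollars
brand.utilities = c(Lindt = 0, Godiva = 0.4, Dove = 0.7, Hershey = 1.05)
round(brand.utilities * dollars.per.utility, 2)  # willingness to pay in $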

This is, for example, sometimes used as a way of quantifying brand equity. In this case, the analysis would say that Hershey's brand equity relative to Lindt is $1.12 per bar. It's a sexy thing to do, as clients feel they understand it. But, in my experience, they never really do. And, in an earlier life, I was part of some big court cases where these misunderstandings led to damaged reputations and unsuccessful litigation. So, while appealing, I recommend you don't do it.

The problem is that when clients look at it, they go, “Ah, this means that Hershey can charge a premium of $1.12 above Lindt.” Is that right? No, it's not. The correct interpretation involves an understanding of the economic concept of consumer surplus. But I'm boring myself now, so I will just leave it at: don't do it.

 

Importance

It's sometimes useful to summarize all the utilities into a smaller set of numbers, which are referred to as importance scores.

On the way to explaining how, a small diversion. It will be quite technical, but after a couple of minutes I will make it all more accessible again.

 

Numeric price attribute

You will hopefully recall that in the last webinar we found the best model had price coded as a numeric attribute. That is, without that nice threshold effect. So why did I show the threshold effect before?

When we fit a linear effect, we are saying that we expect the average utilities to be spaced in proportion to the dollar values. If this linear effect were true, it would mean that the point where this line crosses the axis here is a good representation of the utility of $1.99.

Now, by eye alone, you can see that the $1.99 utilities are almost all to the right of the line. So this tells us that price is not a linear effect.

There are two differences between the ways we have treated price in the model. One is whether price is, on average, linear or not; I've just shown it's not. The other is how they explain differences between people. By deduction, the linear price model must be better at explaining differences between people.

 

Importance is the difference

Here's the utilities plot. Importance is usually defined as the difference between the most appealing and least appealing levels of each attribute.

So, you would say here that Sugar is the most important attribute, followed by cocoa strength, price, and brand. But there's a bit of a trap here. And it's a big trap, which is why I stopped reporting importance many years ago. Compare price and brand: we've said that price is more important than brand.

But this importance is determined entirely by which attribute levels we tested. If we'd only tested $1.99 and $2.49, we would instead be saying it's the other way around, and brand is much more important than price. This distinction is very important. But, in my experience, when you talk about importance it just gets lost, and people misunderstand the results, drawing lots of conclusions that aren't in the data.
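To make the trap concrete, here is a small R sketch with hypothetical utilities:

# Hypothetical average utilities for two attributes
price.utilities = c("$0.99" = 1.4, "$1.49" = 1.0, "$1.99" = 0.9, "$2.49" = 0)
brand.utilities = c(Lindt = 0, Godiva = 0.4, Dove = 0.7, Hershey = 1.05)

# Importance = range between the best and worst levels of each attribute
c(Price = diff(range(price.utilities)), Brand = diff(range(brand.utilities)))

# The trap: test only $1.99 and $2.49 and price's "importance" shrinks
# below brand's, even though preferences haven't changed
diff(range(price.utilities[c("$1.99", "$2.49")]))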

 

Strategic importance

Rather than defining importance based on the full range of levels tested, I find it more useful to define it based on what's strategically relevant.

Let's say our client is Godiva, and they are selling a 2-ounce bar with:

  • Standard sugar
  • Dark chocolate
  • $2.49
  • Belgium
  • No nuts
  • No fair trade offer

Then,

  • Sugar-free really has negative importance, because sugar-free detracts from the current offer.
  • They could change from Dark to Milk, but it's not that important. Its importance is now defined as 13.
  • Note that before we thought cocoa strength was important, but that's really just because most people don't want white. When we are just thinking about Dark vs Milk, it's not that important.
  • Moving further to the right, nuts becomes much less important, as we wouldn't do hazelnuts, and almonds are only a bit more appealing than no nuts.
  • Fair trade becomes more important.

Let's now create some variables measuring importance. We will go back to the HB model and save the utilities.

This time I will save out the utilities with a mean of 0 and a maximum range of 100, scaled separately for each respondent.

We will insert a new R variable.

Now we will group them into a variable set.

In Displayr:

Select variables

Data Manipulation > Combine

Name: Importance
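The new R variables are just differences in each respondent's saved utilities. A minimal sketch, where the utility variable names are hypothetical stand-ins for whatever you saved from the HB model:

# Strategic importance of a change is the utility gain (or loss) relative
# to the current offer, computed separately for each respondent
cocoa.importance = utility.Milk - utility.Dark  # current offer is Dark
sugar.free.importance = utility.Sugar.free - utility.Standard.sugar
nuts.importance = utility.Almonds - utility.No.nuts  # we'd never do hazelnuts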

 

Average importance

Note that sugar free is, in a sense, the most important, but its importance is negative. Of the others, we've now made price the most important.

 

Correlation

Let's look at correlations.

 

Correlations between importance or utilities

In this study, I focus on the goal of investigating opportunities for Godiva to launch a 2-ounce reduced-sugar chocolate bar in the US. Why? Because I wish they would. We can either correlate utilities or importance scores. I tend to find importance easier to understand.

The first column shows us what correlates with sugar free. We can see a negative correlation with price. This is good news: it means that the more people like sugar free, the less important they find price, which means we can charge a higher price. We can also see a positive correlation with Almonds, telling us we should go with a sugar-free almond bar.
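If you prefer raw R, the calculation is a one-liner. A sketch, assuming the importance scores sit in a data frame with the hypothetical name importance.scores:

# Pairwise correlations between respondents' importance scores; the
# sugar-free column shows what moves with liking sugar free
round(cor(importance.scores), 2)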

 

Segmentation

I often find segmentation the best way to understand variation in conjoint data.

 

kMeans

We can do it using the utilities, but it's often more interesting to use importance scores.

 

In Displayr:

Insert > Group/Segment > kMeans

 

Well, that's quite cool.

Remember how we previously saw that, on average, people preferred sugar in their chocolate? We've found a segment of 41% of the market that prefers sugar-free chocolate with almonds!

In the real world, I'd play around a bit with this, exploring more segments. Please see our eBook on segmentation for more information.
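For those working outside the menus, the same idea is a few lines of R. A sketch, again assuming a hypothetical importance.scores data frame; I've used two clusters, but in practice you would explore more:

set.seed(42)  # k-means depends on random starting points
segments = kmeans(importance.scores, centers = 2, nstart = 20)

table(segments$cluster)  # segment sizes
round(segments$centers, 1)  # average importance profile of each segment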

 

Substitution maps

Sometimes you have studies where your attributes have lots of levels.

Utilities tell you which ones are more preferred. But a separate question is which ones are substitutes. Substitution maps can help here.

 

Substitution maps show

Looking at the fast food market, where one attribute was food type, we can see here that, say, Chicken meals and Pizza are closer substitutes than Chicken and Thai.

It's a pretty exotic topic, so I will refer you to the post if you want to create them yourself.

 

Indifference curves

 

Indifference curves show

Indifference curves are a way of showing relative preferences for quantities of two things (e.g., preferences for price versus quality for fast food). In this example, people prefer low prices and high quality (i.e., the utilities are highest at the bottom-left and lowest at the top-right). The bottom-right corner shows that a restaurant with a quality rating of 3.0 and a price of $10 per diner has a utility of around 0 (read from the legend).

If we follow the curve from the bottom-right corner up to where it reaches a restaurant quality of 4, we can see that this equates to a price of around $15. This tells us that a quality of 3 out of 5 at $10 per diner has the same utility as a quality of 4 out of 5 at $15. Putting it a different way, improving quality from 3 to 4 out of 5 is worth about $5 to the average diner.
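Under the hood, an indifference curve is just a contour of constant utility. A minimal sketch, with a hypothetical additive utility function chosen so that, as above, going from quality 3 to 4 offsets a $5 price rise:

# Hypothetical utility: +1.0 per quality point, -0.2 per dollar
prices = seq(5, 30, length.out = 50)  # $ per diner
quality = seq(3, 5, length.out = 50)  # rating out of 5
utility = outer(prices, quality, function(p, q) 1.0 * q - 0.2 * p)

# Each contour line is an indifference curve: quality 3 at $10 sits on
# the same curve as quality 4 at $15
contour(prices, quality, utility,
        xlab = "Price per diner ($)", ylab = "Quality rating")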

But, be careful. As with the willingness to pay, you can over-interpret this.

 

This post shows how to compute indifference curves for conjoint analysis models.

 

 

Simulators

Now for the fun stuff!

 

Simulator (2-ounce chocolate)

The conjoint simulator is perhaps the sexiest output in the world of market research. It simulates a market. You change the design of the products. And, it makes predictions.

Very cool.

Currently Godiva has a price of $2.49 and a share of 14%. If Godiva dropped its price to $0.99, its share would go up to 22.2%.

How do we create them? It's nice and easy. We've added a few bells and whistles since I demoed this feature in the first webinar.

We go to our output from our model,

 

In Displayr:

Go to Hierarchical Bayes (no questions left out)

and click this button.

Press SIMULATION > Create simulator (EXPERIMENTAL).

How many alternatives do we want? Let's go with 4.

Do we want to exclude the Alternative attribute? Yes. This will give each alternative the default utility of alternative 1, which is 0.

Now we can format it as we wish. Change the first alternative to Godiva.

 

Profit

This is currently showing preference share. For the moment, let's assume this is the same thing as market share. How do we turn this into a sales forecast? We will start by duplicating the R Output.

Now we will edit the code:

share = preference.shares.5[, 1] / 100

market.size = 18 # in billions

units = share * market.size

What about revenue?

Add:

price = as.numeric(gsub("$", "", cPrice.1.5, fixed = TRUE)) # strip the "$" so the price is numeric

revenue = units * price

And a simple cost function gives us profit. Of course, you can make it as complicated as you want:

cost = units * 1.2

profit = revenue - cost

We've now got a profit simulator. What happens if we drop the price? Our shares go up, but we make a loss. A simulator allows you to manually run different scenarios.

 

Optimizer

An optimizer does this automatically. We create it in much the same way as a simulator. As I mentioned in the previous few weeks, I'm not a big fan of including “None of these” in simulators or optimizers. But I'll show you how to do it in case you want to.

 

Hierarchical Bayes with None of these

I'm just going to simulate a market with two alternatives: a new sugar-free chocolate, and the “None of these” option.

Now, let's assume that in the market at the moment, Hershey has a 2-ounce sugar-free milk chocolate bar for $0.99.

Now, the question is, what should Godiva introduce?

Let's start by assuming they match Hershey:

  • Brand: Godiva
  • Price: $0.99
  • Milk
  • Sugar free
  • Origin: Belgium
  • Nuts: No
  • Ethical: Blank

This gets us less than half the share of Hershey.

Now, let’s explore other options. So, Displayr's churned through the 384 possible scenarios. Let's sort them.

So, if our goal was to maximize market share, we would recommend $0.99, Milk, 50% reduced sugar, single-origin Venezuelan beans, Almonds, and fair trade. And this gives us the same share as Hershey.
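Behind the scenes, the optimizer is enumerating candidate products and ranking them by simulated share. A sketch of the idea, in which both the attribute levels and the simulate.share() function are hypothetical:

# Enumerate every combination of the attribute levels we allow
candidates = expand.grid(
    Price = c("$0.99", "$1.49", "$1.99", "$2.49"),
    Cocoa = c("Milk", "Dark"),
    Sugar = c("Standard", "50% reduced", "Sugar free"),
    Origin = c("Belgium", "USA", "Venezuela"),
    Nuts = c("No nuts", "Almonds"),
    Ethical = c("Blank", "Fair trade"))
nrow(candidates)  # the number of scenarios to churn through

# simulate.share() is a hypothetical stand-in for the conjoint simulator
# candidates$share = apply(candidates, 1, simulate.share)
# head(candidates[order(-candidates$share), ])  # the best products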

 

 

Demand curves

Demand curves show how demand relates to price. We can just use the simulator and run different price scenarios, or we can use the optimizer. Let's choose our best Godiva option. And what we are left with is a table showing demand.
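As a sketch of what is happening underneath, here is a demand curve traced out of a simple logit share model; all the utilities and the price coefficient are hypothetical:

# Hypothetical base utilities for four alternatives, and a price coefficient
base.utility = c(Godiva = 0.8, Hershey = 1.3, Dove = 0.9, Lindt = 0.2)
price.coefficient = -0.9
competitor.prices = c(1.99, 1.99, 1.99)

godiva.prices = seq(0.99, 2.49, by = 0.1)
godiva.share = sapply(godiva.prices, function(p) {
    u = base.utility + price.coefficient * c(p, competitor.prices)
    exp(u[1]) / sum(exp(u))  # Godiva's logit preference share
})
plot(godiva.prices, godiva.share, type = "l",
     xlab = "Godiva price ($)", ylab = "Preference share")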

 

In Displayr:

  • Insert > Visualization
  • Area
  • Copy table
  • Paste into Output
  • Show as small multiples.

 

Price elasticity

In the last webinar, I was asked about how to compute price elasticity from conjoint.

Here's the formula. I plugged in the numbers from one of the earlier simulations.
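As the slide isn't reproduced here, one common way to plug in those numbers is the arc (midpoint) elasticity, shown below with Godiva's earlier simulation results ($2.49 at a 14% share, $0.99 at 22.2%):

# Arc elasticity: % change in share over % change in price, using midpoints
p1 = 2.49; s1 = 14.0  # original price and share
p2 = 0.99; s2 = 22.2  # new price and share

elasticity = ((s2 - s1) / mean(c(s1, s2))) / ((p2 - p1) / mean(c(p1, p2)))
elasticity  # about -0.53: share rises as price falls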

 

From preference share to market share and sales

The shares we have been showing are predicted shares, based on what people chose in the questionnaire.

To produce sales forecasts, we typically need to do two things:

  • Convert preference shares to market shares. This is very hard.
  • Convert the market share to sales. This is usually easy. Just multiply the market size by the share, as shown before.

What makes it hard to compute market share?

 

Assumptions

The first issue is ecological validity: are people interpreting the choice questions in the same way that they evaluate products in the real world? Sometimes. We just have to hope so.

Then, did we model the data appropriately? We hope so. We did look at ways of checking this last week.

The next four assumptions are things we can improve on. I will give you a chance to read them.

 

Rule: logit draw

In the previous webinar, I talked about how more modern analyses allow you to take respondent uncertainty into account. If we want the simulator to take uncertainty into account, we need to do so explicitly. And this will only work if we have first saved the draws.

This is how we do it.

Note that Godiva's share is 14.2% and Hershey's is 45.4%. To take uncertainty into account, we need to scroll down to this hidden calculation and change the rule.

Rule: Logit Draw

Note that Godiva has gone up to 17.3% and Hershey down to 40.7%. So dealing with uncertainty does change our results. It's considered the smart thing to do, but it's not that common, as it can't easily be done in older software. And even with modern software it's slow.
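Conceptually, the logit draw rule computes a share from each saved draw of the utilities and then averages those shares, instead of computing one share from the averaged utilities. A toy sketch for a single respondent, with hypothetical draws:

set.seed(1)
# 100 hypothetical HB draws (rows) of utilities for 4 alternatives (columns)
draws = matrix(rnorm(400, mean = rep(c(0.8, 1.3, 0.9, 0.2), each = 100)),
               nrow = 100)

logit.shares = function(u) exp(u) / sum(exp(u))

colMeans(t(apply(draws, 1, logit.shares)))  # logit draw rule: average of shares
logit.shares(colMeans(draws))  # mean-utility rule, for comparison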

 

Availability

Now, let's say that in reality we know that Godiva's only distributed in the North East.

We would just add a filter.

In Displayr:

  • FILTERS & WEIGHT > New
  • Region
  • North East

 

And, the shares dropped, showing us that in the North East the share for Godiva is 12%.

You can also, of course, allow your end users to create filters. You would then build separate simulations for each region and add them up, weighting by the size of each region. Let's turn this off and deal with a more complicated case.

Sometimes you want to adjust distribution separately for each respondent. For example, based on the distribution in the shop that they regularly visit. How would we do that? Well, we will need to create a table that shows the distribution for each respondent.

 

In Displayr:

  • Insert > R Output

 

Here I will just do it again using the region data to illustrate the point.

availability = cbind(TRUE, Northeast == "Selected", TRUE, TRUE) # one column per alternative: TRUE means available

Now, look at this output. It's got one column for each alternative.

There are 403 respondents. I've got TRUE in column 2 based on being in the North East, and I've assumed complete distribution for the other alternatives. I can set my assumptions by clicking on this output here.

So, after factoring in the availability data, Godiva is now down to a 1.7% share.
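In general form, this adjustment just zeroes out the unavailable alternatives and renormalizes each respondent's shares. A sketch, where shares is a hypothetical respondents-by-alternatives matrix of preference shares:

# Zero out the shares of alternatives a respondent cannot buy, then
# renormalize so each respondent's shares still sum to 1
adjusted = shares * availability  # TRUE/FALSE becomes 1/0
adjusted = adjusted / rowSums(adjusted)
colMeans(adjusted)  # availability-adjusted market shares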

 

Scale

Answering conjoint questions is a bit boring, so people make mistakes. Shopping is also a bit boring. We are often lazy and careless shoppers; we make mistakes there too. A basic conjoint simulator assumes we make mistakes at the same rate in the real world as in the conjoint study. We can tune choice models to assume more or fewer mistakes. The way we tune this is with a scale parameter.

A value of 1 indicates we think the rate of mistakes is the same in the real world as in the experiment.

Let's set it at 0.5.

Note that when we do this, all the shares become more equal.

If we thought there was less error in the real world, we would get bigger differences.
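The scale parameter simply multiplies the utilities before they enter the logit formula. A quick sketch of its effect, with hypothetical utilities:

utilities = c(Godiva = 0.8, Hershey = 1.3, Dove = 0.9, Lindt = 0.2)

scaled.shares = function(scale)
    round(100 * exp(scale * utilities) / sum(exp(scale * utilities)), 1)

scaled.shares(1)    # as estimated
scaled.shares(0.5)  # assume more real-world error: shares become more equal
scaled.shares(2)    # assume less error: differences get bigger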

If we have some market share data, we can tune it to match the market share.

I'm going to switch back to ignoring uncertainty, just to save time. When we use the draws, it has to rescale 100 times as much data, which is boring to watch.

 

In Displayr:

  • Rule: Logit respondent draw
  • Scale to shares

 

We then type in our expected market share, as proportions that add up to 1.

Shares: .26, .16, .12, .46

It tells us that the scale factor is 1.2014.

Note that there is not a perfect match between our preference shares and the shares we have scaled to. But the shares are as close as we can get by changing the scale parameter alone. We can then copy this value and fix it for all future simulations.
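Under the hood, scaling to shares is a one-dimensional search for the scale factor whose predicted shares come closest to the targets. A sketch, reusing the hypothetical utilities from the previous sketch:

utilities = c(Godiva = 0.8, Hershey = 1.3, Dove = 0.9, Lindt = 0.2)
target = c(.26, .16, .12, .46)

misfit = function(scale) {
    shares = exp(scale * utilities) / sum(exp(scale * utilities))
    sum((shares - target)^2)  # squared error against the target shares
}
optimize(misfit, interval = c(0.01, 10))$minimum  # best-fitting scale factor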

 

Calibration

Another assumption of our model was that we have not ignored any key attributes.

If we did, that could explain a difference between our shares and the real world. We can fix this using calibration.
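Conceptually, calibration adds an alternative-specific constant to each alternative's utility so that the predicted shares match the targets exactly. At the aggregate level the sketch is one line; both share vectors are hypothetical:

predicted = c(.22, .20, .14, .44)  # hypothetical simulator shares
target = c(.26, .16, .12, .46)  # observed market shares

# Constants to add to each alternative's utility so shares match exactly
constants = log(target / predicted)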

Note that in this case our shares now exactly match the ones we put in. As with the scale factor, we need to lock the values in. A lot of care should be exercised before using calibration: it's only valid if the reason we have failed to predict perfectly is that we are missing some attributes. But that assumption is a bit tenuous. Many other things could also explain the difference between the simulator and the actual share.

I don't like to use calibration, as I think it makes clients believe that the research is more accurate than it is, and then they rely on it uncritically.

 

Summary

So, we have looked at lots of techniques for understanding preferences and predicting behavior. That concludes our four-part series on conjoint. If you wish to learn more about how Displayr's conjoint features can help you halve your analysis and reporting time, come view a demo with us and we'll show you how!
