# Thread: Calling the power of statistics to help evaluate our marketing!

1. ## Calling the power of statistics to help evaluate our marketing!

Hi all,

Brief introduction: I work as an analyst @ a tech startup (industrial engineering grad). I've had some statistics coursework, but have a number of cobwebs to clear out.

Here's the problem: Our marketing department does a certain number of activities for any featured event we put on. The question is, which activities have the biggest impact on driving participation?

The wrinkle: each event has some inherent level of attractiveness that is difficult to control for. For instance, it'd be akin to evaluating whether driving with the windows up or down leads to better gas mileage, except you're driving cross-country, so every day you're on a different road: you can't control for road slope, grade, etc., which would affect gas mileage in a significant way.

Proposed solution: I was thinking of running hundreds of events, turning off one activity at a time (randomly, so both large and small events get data points across activities), then using unpaired t-tests to compare the large events with all activities on against each scenario where one activity was turned off, then repeating for the smaller events.
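To make the proposal concrete, here is a minimal sketch of one such comparison as an unpaired (Welch's) t-test, which doesn't assume equal variances between the two groups. The attendance numbers and the "email off" condition are entirely made up for illustration.

```python
# Hypothetical comparison: events with all activities on vs. events with
# one activity (say, email reminders) turned off.
from statistics import mean, variance
import math

def welch_t(sample_a, sample_b):
    """Return the Welch t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    se2_a = variance(sample_a) / na  # squared standard error, group A
    se2_b = variance(sample_b) / nb  # squared standard error, group B
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2_a + se2_b)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se2_a + se2_b) ** 2 / (
        se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1)
    )
    return t, df

all_activities = [120, 135, 110, 150, 128]   # hypothetical attendance
email_turned_off = [95, 140, 105, 118, 112]  # hypothetical attendance
t, df = welch_t(all_activities, email_turned_off)
print(f"t = {t:.2f}, df = {df:.1f}")
```

In practice you'd hand this to `scipy.stats.ttest_ind(..., equal_var=False)` for the p-value, but the formula shows what the test is actually measuring: a mean difference relative to its standard error.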

The issue is that I think I'd need a huge n for anything meaningful to come out of this, since our events can vary widely from one day to the next.

I'm thinking, instead, of using factor analysis. This could look at the variance in the data and tell us whether any of the levers we're pulling explain any of that variance.

What do you think?

2. ## Re: Calling the power of statistics to help evaluate our marketing!

I would think regression or ANOVA would work better than a t-test, but it seems that you would be simulating the data points rather than having real data. How will you know whether what you simulate matches the reality you are trying to mimic?
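To illustrate the ANOVA suggestion: instead of many pairwise t-tests, a one-way ANOVA compares attendance across all the "one activity off" conditions at once. A minimal sketch of the F statistic, with hypothetical data:

```python
# One-way ANOVA: does attendance differ across conditions at all?
from statistics import mean

def one_way_anova_F(groups):
    """Return the F statistic for a one-way ANOVA over a list of samples."""
    all_obs = [x for g in groups for x in g]
    grand = mean(all_obs)
    k, n = len(groups), len(all_obs)
    # Variation of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Variation of observations around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [
    [120, 135, 110, 150],  # all activities on (hypothetical)
    [95, 118, 105, 112],   # email off (hypothetical)
    [101, 99, 123, 108],   # social ads off (hypothetical)
]
print(f"F = {one_way_anova_F(groups):.2f}")
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) says the conditions differ somewhere; follow-up contrasts then locate which activity matters.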

You have multiple confounds and need a design that addresses this. I believe (it has been many years) that methods such as Latin squares permit you to test only a portion of the potential confounding features rather than all levels of each; you might look this up in a book on experimental designs. Exploratory factor analysis will identify common factors in your data, but I don't see how it would test whether certain factors had more influence on the dependent variable you are interested in.

It is entirely possible I simply misunderstand what you are looking for. I assume you have many variations of conditions and want to find out which level of which variable has the greatest impact on some dependent variable. If so, you have to be able to gather data on the levels of the IV and the levels of the DV associated with it. I have never seen EFA used for that.

3. ## Re: Calling the power of statistics to help evaluate our marketing!

Seems like a case where a fractional factorial design should be used, given the logistics of the problem. This is just a pointer to the type of design you will probably want, and a keyword to help with further research. I haven't done much with them, but you essentially make a few assumptions (mainly that higher-order interactions won't exist) in order to investigate the factors of interest with a design that allows the experiment to be completed before the end of the universe.
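The core trick is easy to show. A half-fraction design for four factors (a 2^(4-1)) runs a full factorial on three of them and aliases the fourth as their product (defining relation I = ABCD), cutting 16 runs down to 8 at the cost of confounding higher-order interactions. A sketch, with -1/+1 coding for activity off/on:

```python
# Build a 2^(k-1) half-fraction design: full factorial on the first
# k-1 factors, with the last factor set to their product.
from itertools import product

def half_fraction(n_factors):
    runs = []
    for levels in product((-1, 1), repeat=n_factors - 1):
        last = 1
        for v in levels:
            last *= v  # aliased factor = product of the others
        runs.append(levels + (last,))
    return runs

design = half_fraction(4)
for run in design:
    print(run)
print(f"{len(design)} runs instead of {2 ** 4}")
```

Every run satisfies A*B*C*D = +1, which is exactly the defining relation; main effects stay estimable as long as the assumed-away interactions really are negligible.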

4. ## Re: Calling the power of statistics to help evaluate our marketing!

Ignore my comments on Latin squares. I don't think they would be of any use for what you want to do (my memory was bad).

http://www.stat.wisc.edu/courses/st5...ndouts17-4.pdf

5. ## Re: Calling the power of statistics to help evaluate our marketing!

Yeah, a Latin square is more for when you have two blocking factors. Then again, blocks are really just treatments that we don't care about...
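For anyone following along, the structure is easy to see in code: in a Latin square each treatment appears exactly once in every row and every column, so two blocking factors (e.g. weekday and venue, to pick arbitrary examples) are balanced out simultaneously. A cyclic construction:

```python
# Generate an n x n Latin square by cyclic rotation: each treatment
# occurs exactly once per row and once per column.
def latin_square(treatments):
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)]
            for row in range(n)]

square = latin_square(["A", "B", "C", "D"])
for row in square:
    print(" ".join(row))
```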

6. ## Re: Calling the power of statistics to help evaluate our marketing!

I was remembering that it reduced the number of treatments you have to run, but not in a way that's useful here. It's been too many years since I read about it.

7. ## Re: Calling the power of statistics to help evaluate our marketing!

Pardon me for being tardy. It's a besetting sin.

It strikes me that you're looking at two widely different solutions. The first, the hypothesis test, seems aimed at simply figuring out whether you have any real variation from event to event. The second, factor analysis, is meant to delve into the data and determine whether there are actually only a few latent variables rather than many confounding ones. These point in different directions, it seems to me. Which do you want?

I think you're right that a situation with huge variance is best addressed with a higher n, but as you note, that's not always practical, and you're inevitably left with a wide band of uncertainty. Consequently, in your place I'd think about applying a Bayesian technique that would at least allow you to quantify how unsure you are. Establish priors based on your experience, then work in probabilities, not hypotheses. The outcome would be a set of probabilities that are actionable rather than simply analytical.

You can do Bayesian hypothesis testing if you want, which allows you to skip the whole null-hypothesis quagmire and simply evaluate several hypotheses at once. Or you could build a Bayesian network that would incorporate everything you know about the situation and allow you to make somewhat informed predictions. It all depends on what you want as an outcome.
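As a minimal sketch of the Bayesian idea applied to this problem: treat an activity's effect on the sign-up rate as a Beta-distributed probability, start from a prior that encodes experience, and let the data update it. All the numbers here (the prior strength, the 30-of-200 result, "email reminders") are hypothetical.

```python
# Conjugate Beta-Binomial update: prior belief + observed sign-ups
# -> posterior belief about the sign-up rate under one condition.
def beta_update(alpha, beta, successes, failures):
    """Update a Beta(alpha, beta) prior with binomial data."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

# Prior: roughly a 10% sign-up rate, worth about 20 invites of evidence.
prior_alpha, prior_beta = 2, 18
# Observed (hypothetical): with email reminders on, 30 sign-ups / 200 invites.
a, b = beta_update(prior_alpha, prior_beta, successes=30, failures=170)
print(f"posterior mean sign-up rate: {beta_mean(a, b):.3f}")
```

Running the same update for each "activity off" condition yields comparable posterior distributions, so the output is "activity X probably lifts sign-ups by about this much, with this much uncertainty" rather than a bare reject/fail-to-reject.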
