To provide some context to my question: I have been tasked with identifying features that have an effect on sales for a company. This company owns 6 stores, all located in different areas. 3 stores are type X and 3 stores are type Y. Each store has a different floor area (sq. ft.), and this data is provided.

The dataset I have at my disposal consists of 8 columns:
  1. Store Number (1 to 6)
  2. Store area (Unique for each store)
  3. Store Type (X or Y)
  4. Week Number (1 to 6)
  5. Sale Week? (Yes/No) (FYI weeks 3 and 6 are sale weeks)
  6. Temperature (Continuous and unique for each store and week)
  7. Fuel price (Continuous and unique for each store and week)
  8. Total Sales (Unique for each store and week)

With 6 different stores and data over 6 different weeks, this gives me 36 sales figures.

Going back to my task, I could, for example, hypothesise that sale weeks have an effect on total sales. I can test this by carrying out a t-test on the two groups (sale week/non-sale week) and seeing if there is a difference in average sales between them. I could also hypothesise that temperature has an effect on sales, so I could split the continuous temperature data into three equal bands, call them low/medium/high, and carry out a one-way ANOVA to see if the average sales differ between the groups. If they do, then I could conclude that temperature has an influence on sales. I could repeat this for every hypothesis ("does store type have an influence on sales?", etc.).
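For concreteness, the two tests I describe above could be sketched as follows (the sales and temperature figures here are invented placeholders; only the column structure mirrors my dataset):

```python
# Hypothetical sketch of the t-test and one-way ANOVA described above.
# All numbers are randomly generated stand-ins for the real dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 36 observations: 6 stores x 6 weeks (invented values for illustration).
sales = rng.normal(loc=10000, scale=1500, size=36)
week = np.tile(np.arange(1, 7), 6)          # week number per observation
sale_week = np.isin(week, [3, 6])           # weeks 3 and 6 are sale weeks
temperature = rng.uniform(5, 30, size=36)   # continuous temperature

# t-test: average sales in sale weeks vs non-sale weeks
t_stat, p_ttest = stats.ttest_ind(sales[sale_week], sales[~sale_week])
print(f"t-test: t={t_stat:.2f}, p={p_ttest:.3f}")

# One-way ANOVA: temperature binned into three equal-count bands
bands = np.digitize(temperature, np.quantile(temperature, [1/3, 2/3]))
groups = [sales[bands == b] for b in range(3)]   # low / medium / high
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")
```

(I binned by quantiles here so the three bands have equal counts; equal-width bands would be the other reading of "three equal bands".)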

However, the issue I am facing is that I know I cannot keep running separate t-tests/ANOVAs on every factor: I would end up testing the same data repeatedly, which inflates the family-wise risk of committing a Type I error. Is there a better way to approach this whilst minimising the risk of a Type I error? Or is this inflation inevitable when analysing multiple different factors?
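The only mitigation I am currently aware of is a Bonferroni-style correction of the significance threshold across the family of tests. A minimal sketch (the p-values below are invented placeholders, one per hypothesis):

```python
# Minimal sketch of a Bonferroni correction across a family of tests.
# The p-values are invented placeholders, one per hypothesis tested.
p_values = {
    "sale_week": 0.012,
    "temperature": 0.040,
    "store_type": 0.300,
    "fuel_price": 0.048,
}

alpha = 0.05
m = len(p_values)                 # number of tests in the family
corrected_alpha = alpha / m       # per-test Bonferroni threshold

significant = [name for name, p in p_values.items() if p < corrected_alpha]
print(f"corrected alpha = {corrected_alpha:.4f}")
print(f"significant after correction: {significant}")
```

With four tests the per-test threshold drops to 0.0125, so results that looked significant at 0.05 (like the invented temperature and fuel-price p-values) no longer pass. I am unsure whether this is the right approach here or whether it is overly conservative for only 36 observations.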

I would normally use multiple regression for this, but I am required to use ANOVA/t-tests for the analysis.