The issue isn't whether the data come from the same data set or not. It is a question of protection against making a false positive claim. Setting a critical value for significance of p < 0.05 gives you 95% protection against a false positive for any single test. I think of it as being like Russian roulette with one bullet in a 20-chamber revolver: there is one chance in 20 of shooting yourself in the foot.

Now, imagine that the responses are in fact perfectly random and there is no connection or difference between the responses for any of your factors. When you do your basic analyses with your random data you will get a series of p values - it's not entirely clear how many from your description, but at least 5. Ideally, all these p values will be > 0.05, but of course, with 5 p values you now have 5 chances of shooting yourself in the foot. Your protection against a false positive has been eroded from 95% to roughly 77%, because the chance of at least one false positive is 1 - 0.95^5 ≈ 0.23.
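To make the erosion concrete, here is a small sketch (the function name is mine) computing the family-wise error rate for m independent tests, each run at significance level alpha:

```python
def familywise_error_rate(alpha: float, m: int) -> float:
    """Probability of at least one false positive across m
    independent tests, each at significance level alpha."""
    # P(no false positive in one test) = 1 - alpha;
    # for m independent tests, raise it to the power m.
    return 1 - (1 - alpha) ** m

# 5 tests at the conventional 0.05 level:
fwer = familywise_error_rate(0.05, 5)
print(f"{fwer:.3f}")  # prints 0.226
```

So with 5 tests your protection drops from 95% to about 77%, and it keeps falling as you add tests.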

This problem of eroded protection against false positives when there are multiple p values is one that has always plagued statisticians. Many approaches have been proposed, but none really solves the problem. The simplest is the Bonferroni correction, and the alternatives are not much different in practice.
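For illustration, the Bonferroni correction simply divides the significance level by the number of tests, so each individual p value is compared against alpha / m. A minimal sketch (the function name and the p values are hypothetical):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag each p value as significant only if it is below
    alpha / m, where m is the number of tests performed.
    This keeps the family-wise error rate at or below alpha."""
    m = len(p_values)
    threshold = alpha / m  # 0.05 / 5 = 0.01 for five tests
    return [p < threshold for p in p_values]

p_values = [0.01, 0.04, 0.20, 0.008, 0.03]  # hypothetical results
print(bonferroni_significant(p_values))
# prints [False, False, False, True, False]
```

Note how 0.04 and 0.03, which would pass an uncorrected 0.05 threshold, are no longer counted as significant; the price of restored protection is reduced power.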