I'm using Affymetrix microarrays to check whether there are differences in gene expression between two groups of animals that received different treatments.

I'm using a t-test with permutations (my groups only have three animals each, and I read that a permutation test is preferable when you cannot check the normality of the gene expression values within the groups). Because of the multiple-testing problem that arises in microarray experiments, I will apply an FDR correction to control the false positives among the p-values.
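For context, this is roughly the per-gene permutation test I am running (a plain NumPy sketch; `perm_t_test` is my own helper name, not a library function). One thing I noticed while writing it: with only three animals per group there are just C(6,3) = 20 distinct label assignments, which limits how small the permutation p-values can get.

```python
import numpy as np

def perm_t_test(x, y, n_perm=10000, rng=None):
    """Two-sided permutation test based on the t-statistic.

    Returns the fraction of label shufflings whose |t| is at least
    as large as the observed |t| (with a +1 correction so the
    p-value is never exactly zero).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    pooled = np.concatenate([x, y])
    n_x = len(x)

    def t_stat(a, b):
        # Welch-style t-statistic: difference of means over its standard error
        return (a.mean() - b.mean()) / np.sqrt(
            a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
        )

    t_obs = abs(t_stat(x, y))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)  # reshuffle group labels
        count += abs(t_stat(perm[:n_x], perm[n_x:])) >= t_obs
    return (count + 1) / (n_perm + 1)
```

In the real analysis this runs once per probe set, collecting one p-value per gene.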

I'm new to this kind of testing and I have never worked with FDR before, so there are a few things I don't understand.

1- Should the FDR correction be applied to all the p-values I get from the t-tests, or only to those below 0.05 (I was thinking of using a 5% significance level)?

2- When I calculate the FDR for all p-values, I get very high FDR values, and many of them are identical. Am I doing something wrong, or is this because I have around 200 p-values below 0.05 out of around 11,000 in total, so the proportion of "significant" genes is too low?

3- What cutoff value do people normally use? Just to check that I have understood: after I calculate the FDR values, I should sort my list of p-values by FDR and then select everything below my FDR cutoff, right? That would mean that, within the resulting corrected list, I can expect about cutOff*length(new_list) false positives.
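In case it clarifies questions 2 and 3, this is the Benjamini-Hochberg procedure as I understand it, hand-rolled in NumPy rather than taken from a library (so any mistake in my understanding would show up here):

```python
import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values).

    For sorted p-values p_(1) <= ... <= p_(m), the adjusted value is
    q_(i) = min over j >= i of (m * p_(j) / j), mapped back to the
    original order and clipped to [0, 1].
    """
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)  # m * p_(i) / i
    # enforce monotonicity, walking from the largest p-value down;
    # this step is what produces runs of identical q-values
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q
```

My plan would then be to call genes significant with `qvals = bh_qvalues(pvals)` and `significant = qvals < 0.05` (or whatever cutoff is standard); e.g. `bh_qvalues([0.01, 0.02, 0.03, 0.5])` gives `[0.04, 0.04, 0.04, 0.5]`, which also shows how several p-values can end up sharing one adjusted value.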

Please correct me if I'm wrong. I would really appreciate it if somebody could help me answer these questions, as I'm new to this kind of statistical testing.

Best regards