Is it? It seems like this would be helpful if the errors have a lognormal distribution, but I can't see why this would be a general suggestion. Which sources are you getting this idea from?

If the problem is just with non-normal errors, bootstrapping seems like a good choice if you want to stick with the basic linear regression framework. You can also think about whether you need to do anything at all about the non-normal errors - I know we've said this tons of times, but the coefficients remain unbiased, consistent, and efficient (BLUE, at least) if the errors are non-normal but the other assumptions are met. The sampling distribution of the coefficients won't be exactly normal if the errors aren't normal, which could theoretically muck up significance tests and confidence intervals, but it will converge to a normal distribution with larger sample sizes.
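To make the bootstrap suggestion concrete, here's a minimal sketch of a case (pairs) bootstrap for a regression slope, using only NumPy. The data-generating setup (exponential errors, true slope of 0.5) is purely illustrative, not from any particular source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data with skewed (non-normal) errors.
n = 200
x = rng.uniform(0, 10, n)
errors = rng.exponential(scale=2.0, size=n) - 2.0  # mean-zero but skewed
y = 1.0 + 0.5 * x + errors

X = np.column_stack([np.ones(n), x])

def ols_slope(X, y):
    """OLS slope coefficient via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Case (pairs) bootstrap: resample rows with replacement,
# refit, and collect the slope each time.
n_boot = 2000
boot_slopes = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, n)
    boot_slopes[b] = ols_slope(X[idx], y[idx])

# Percentile 95% confidence interval for the slope --
# no normality assumption about the error distribution needed.
lo, hi = np.percentile(boot_slopes, [2.5, 97.5])
print(f"slope = {ols_slope(X, y):.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Resampling (x, y) pairs rather than residuals is the more robust default here, since it doesn't assume the error distribution is the same at every x.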

A transformation is the simplest option and easy to do, but it makes interpretation hard. E.g., how do you interpret a regression coefficient after a square-root transformation of Y?