Yes, case 3 is possible. Here's one way it could occur.

First, note that the estimate of a regression coefficient is biased whenever an explanatory variable that is correlated with the included one is omitted. It can be shown that the bias equals (the true coefficient on the omitted variable) times (the slope you get when the omitted variable is regressed on the included variable). The proof isn't hard if you know a bit of matrix algebra -- let me know if you want details.

Suppose the true model is Y=bX+cZ+e, where X and Z are explanatory variables, b and c are their coefficients, and e is a normally distributed error.

If we run the regression on both X and Z, we should get accurate estimates, but if we run it on X only, the estimate of b will be biased. By the result above, the estimated b will be:

b+(c*d), where d is the slope estimate from the regression Z=dX+u.
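(In this two-variable case you don't even need matrix algebra. Taking everything mean-centered, the univariate slope is Cov(X,Y)/Var(X) = Cov(X, bX+cZ+e)/Var(X) = b + c*Cov(X,Z)/Var(X), and Cov(X,Z)/Var(X) is exactly the slope d from regressing Z on X.)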

So, if c and d have the same sign, the estimate of b is too big, while if they have opposite signs, it is too small. In the particular case where c*d=-b, the estimate of b would be zero.

Hence, X could appear insignificant in a univariate regression if c*d is close enough to -b.
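To see this in numbers, here's a quick simulation (a sketch using NumPy; the setup and variable names are mine, not from any real dataset). I pick b = 1, c = 1, d = -1, so that c*d = -b exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

b, c, d = 1.0, 1.0, -1.0          # true coefficients, chosen so that c*d = -b

X = rng.normal(size=n)
Z = d * X + rng.normal(size=n)    # Z = dX + u, so Z is correlated with X
Y = b * X + c * Z + rng.normal(size=n)

# Univariate least-squares slope of Y on X (all variables have mean zero,
# so no intercept is needed):
b_uni = (X @ Y) / (X @ X)

# Bivariate regression of Y on [X, Z]:
A = np.column_stack([X, Z])
b_biv, c_biv = np.linalg.lstsq(A, Y, rcond=None)[0]

print(b_uni)   # close to b + c*d = 0: X looks insignificant
print(b_biv)   # close to the true b = 1
```

The univariate slope comes out essentially zero while the bivariate regression recovers b and c, matching the b+(c*d) formula.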

A real world example: Suppose a certain public works program gets funding from two government agencies whose budgets are X and Z. Each agency must give a fixed percentage of its budget, say b% and c% respectively, to the program. Clearly the program's total funding depends positively on both X and Z.

But now suppose that the budgets X and Z are both drawn from a larger pool in such a way that more money for X means less money for Z. (Perhaps X gets a random percentage and then Z gets a random percentage of whatever's left, with the final remainder going elsewhere). We now have X and Z negatively correlated.

Under these conditions, the univariate estimate of b is biased towards 0, which may make X appear insignificant.
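The funding story can be simulated the same way (again a sketch; the pool sizes, the random shares, and the 10% contribution rates are made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

pool = rng.uniform(50, 150, size=n)              # the larger pool of money
X = rng.uniform(0.2, 0.8, size=n) * pool         # X gets a random percentage
Z = rng.uniform(0.2, 0.8, size=n) * (pool - X)   # Z gets a share of what's left

b, c = 0.10, 0.10                                # each agency gives 10% of its budget
funding = b * X + c * Z + rng.normal(size=n)     # total program funding, plus noise

# Mean-center everything so the simple no-intercept slope formulas apply:
Xc, Zc, Fc = X - X.mean(), Z - Z.mean(), funding - funding.mean()

d = (Xc @ Zc) / (Xc @ Xc)          # slope of Z regressed on X: negative
b_uni = (Xc @ Fc) / (Xc @ Xc)      # univariate estimate of b

print(d)       # negative, since X and Z compete for the pool
print(b_uni)   # below the true b = 0.10, i.e. biased towards 0
```

With these numbers c*d is only mildly negative, so b is attenuated rather than driven all the way to zero; making the budgets more strongly negatively correlated (for instance, holding the pool fixed) pushes the univariate estimate further towards insignificance.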