Suppose you have a system with two possible outcomes, 0 or 1, each 50% likely on any given trial. If, say, 1 comes up 6 times out of 10 trials, how can I tell whether that is statistically significant? At what point does a deviation from 50% become significant? For example, if 1 comes up 30 times out of 50, is that significant?

I would like to know the mathematics and reasoning behind this. Also, is there a way of figuring out the standard deviation of the results?

The reason I am asking is that a friend and I are having an argument. He feels that a result of 13/20 is significant. My gut feeling is that 20 trials is too small a sample, and that you would need something like 24/40 before you could say the result differs significantly from 50%.
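To make the question concrete, here is how I would sketch the computation in Python (my guess is that the relevant quantity is the exact binomial tail probability under a fair 50/50 null, doubled for a two-sided test; please correct me if that is the wrong approach):

```python
import math

def binom_two_sided_p(k, n):
    """Exact two-sided binomial p-value under a fair (p = 0.5) null:
    probability of a result at least as extreme as k successes in n
    trials, in either direction."""
    k_hi = max(k, n - k)  # fold to the more extreme tail
    tail = sum(math.comb(n, i) for i in range(k_hi, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)  # two-sided: double the one-sided tail

def binom_sd(n, p=0.5):
    """Standard deviation of the number of successes: sqrt(n * p * (1 - p))."""
    return math.sqrt(n * p * (1 - p))

print(binom_two_sided_p(13, 20))  # ~0.263, well above a 5% threshold
print(binom_two_sided_p(24, 40))
print(binom_sd(20))               # ~2.236 successes for 20 fair trials
```

If I've set this up right, neither 13/20 nor 24/40 would clear the usual 5% bar, which would mean both of us are wrong about where the cutoff is, but I'd like to understand the reasoning rather than just trust my own code.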

Input is appreciated. Thanks in advance.