I need help with Inter-Rater agreement

I'm developing a scale and my items were already checked by a group of 4 judges.

They gave a dichotomous response (yes, keep the item, or no, remove it).

Now what should I do? I know I have to calculate something called "inter-rater agreement", but I don't know how, or which statistical program I should use.

Can someone help me? I'm desperate, thanks!
Fleiss' kappa is not appropriate if the same raters are used to rate each item. Further, especially for dichotomous data, kappa statistics are a questionable choice in general.

Questions for Peppermint:
1. What kind of stat software do you have available (SPSS, SAS, etc.)?
2. How many items do you have?
3. Who told you to calculate inter-rater agreement (i.e., whom must you please here -- adviser, client, teacher, etc.)?

In a worst-case scenario (e.g., you're in a hurry and must do *something* quickly), the simplest choice might be to calculate Cohen's two-rater kappa for each pair of raters (6 pairwise comparisons with 4 raters) and then average these individual kappa statistics. This average is sometimes called Light's kappa.
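If it helps, here is a minimal sketch of that pairwise-average approach in plain Python (no stats package needed). The four judges' yes/no strings below are made-up illustration data, not anything from your study:

```python
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' labels over the same items."""
    assert len(a) == len(b)
    n = len(a)
    # Observed proportion of agreement
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance-expected agreement from each rater's marginal proportions
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    if pe == 1.0:  # each rater used only one category; kappa is undefined
        return float("nan")
    return (po - pe) / (1 - pe)

def mean_pairwise_kappa(ratings):
    """Average Cohen's kappa over all rater pairs (Light's kappa).

    `ratings` is a list with one list of labels per rater.
    """
    kappas = [cohens_kappa(r1, r2) for r1, r2 in combinations(ratings, 2)]
    return sum(kappas) / len(kappas)

# Hypothetical yes/no judgments from 4 judges on 10 items:
judges = ["YYNYYNYYYN",
          "YYNYYNYNYN",
          "YYYYYNYYYN",
          "YNNYYNYYNN"]
ratings = [list(j) for j in judges]
print(round(mean_pairwise_kappa(ratings), 3))  # → 0.577
```

With 4 raters, `combinations(ratings, 2)` yields exactly the 6 pairwise comparisons mentioned above. The same numbers can also be obtained in SPSS (Crosstabs → Statistics → Kappa, once per pair) if you prefer a menu-driven route.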