Kappa Coefficient Confusion

Hi, I hope someone can help with this problem.

I need to measure agreement among multiple raters, and I used Fleiss' kappa to do so; I thought I had done it correctly. However, my reviewer has asked that I consider "Cronbach's alpha (or other measures)." As I am not a trained statistician, I have no idea how to derive this. My understanding is that Fleiss' kappa should suffice for measuring agreement among multiple raters, and I do not want to attempt another method if it is unnecessary.

Please let me know if you have an opinion on the matter. Your advice is much appreciated. Thank you.

Rohit Soans
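P.S. In case it helps to see exactly what I computed, here is a minimal sketch of Fleiss' kappa from a subject-by-category count table. The counts below are made-up illustrative data, not my real ratings:

```python
# Minimal sketch of Fleiss' kappa for a subject-by-category count table.
# table[i][j] = number of raters who assigned subject i to category j;
# every subject must be rated by the same number of raters.

def fleiss_kappa(table):
    N = len(table)        # number of subjects
    n = sum(table[0])     # raters per subject
    total = N * n         # total number of ratings

    # Observed agreement: mean over subjects of pairwise rater agreement.
    p_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in table
    ) / N

    # Chance agreement: sum of squared marginal category proportions.
    p_e = sum(
        (sum(row[j] for row in table) / total) ** 2
        for j in range(len(table[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

# Made-up example: 4 subjects, 3 raters, 2 categories.
counts = [
    [3, 0],   # all three raters chose category 1
    [0, 3],   # all three chose category 2
    [2, 1],
    [1, 2],
]
print(round(fleiss_kappa(counts), 4))  # -> 0.3333
```

(I believe `statsmodels.stats.inter_rater.fleiss_kappa` computes the same quantity from such a table, if a library routine is preferred over hand-rolled code.)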