Which clustering algorithm are you using? That determines how you apply your data mining model. For instance, if you're using k-means, training a model on a data set with a given similarity measure produces a set of cluster centers. You apply that model by assigning each new observation to whichever center it is nearest under the measure the model was trained with. The model can be thought of as a pair (C, d), where C is the set of centers and d is the measure.

It makes no sense to say you've added observations to the cluster. The cluster isn't the "thing" you've modeled. What you modeled was (C, d), and that was trained on the *original* data set. The model assigns new observations to whichever clusters they fall into, so on that model, what is most similar to a given center is just whatever the model determines. If you fit a new model you might end up with some (C', d) with very different clusters. You can also change the number of centers or use a completely different similarity measure. The point being, you aren't creating clusters. You're finding centers in that case.
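To make the (C, d) idea concrete, here's a minimal NumPy sketch (not any particular library's API): Lloyd's algorithm fits the centers C on training data with Euclidean distance as d, and "applying the model" to new observations is nothing more than a nearest-center lookup against those frozen centers. The data, the deterministic initialization, and the function names are all illustrative.

```python
import numpy as np

def assign(X, centers):
    # Apply the model (C, d): label each row of X with the index of its
    # nearest center under Euclidean distance.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return dists.argmin(axis=1)

def kmeans_fit(X, init_centers, n_iter=10):
    # Lloyd's algorithm: alternate assignment and center updates.
    centers = init_centers.astype(float).copy()
    for _ in range(n_iter):
        labels = assign(X, centers)
        for j in range(len(centers)):
            members = X[labels == j]
            if len(members):  # guard against an empty cluster
                centers[j] = members.mean(axis=0)
    return centers

# Two well-separated toy blobs; init one center in each (illustrative data).
train = np.array([[0., 0.], [1., 0.], [0., 1.],
                  [10., 10.], [11., 10.], [10., 11.]])
centers = kmeans_fit(train, init_centers=train[[0, 3]])

# "Applying" the trained model to new observations never changes `centers`:
new_points = np.array([[0.5, 0.5], [9., 9.]])
labels = assign(new_points, centers)  # → array([0, 1])
```

Note that the clusters themselves never appear as an object here: only `centers` (the C) and the distance inside `assign` (the d) are the model, and a different initialization or a different distance would yield a different (C', d) with different cluster memberships.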

Of course, this only applies to what k-means produces. If you used hierarchical clustering, k-NN, or some other approach, you'd have a different sort of model. A hierarchical clustering model produces a dendrogram that relates every observation, and you'd apply it to test data differently, but the point remains the same: the clusters you generate aren't the end result. Those are products of the model. The model itself is the product of the clustering (cluster centers, a dendrogram, etc.).
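For contrast, a quick sketch of the hierarchical case using SciPy's `scipy.cluster.hierarchy` (assuming SciPy is available; the toy data is illustrative): the model the procedure produces is the dendrogram (the linkage matrix), and any particular set of clusters is just one cut through it.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Four points forming two obvious pairs (illustrative data).
X = np.array([[0., 0.], [0.5, 0.],
              [10., 10.], [10.5, 10.]])

# The model here is the dendrogram, encoded as a linkage matrix.
Z = linkage(X, method='average')

# Clusters are a *product* of the model: cutting the dendrogram at
# 2 clusters is one choice among many cuts of the same model.
labels_2 = fcluster(Z, t=2, criterion='maxclust')
```

Cutting the same `Z` with a different `t` yields a different set of clusters without refitting anything, which is exactly the sense in which the clusters are outputs of the model rather than the model itself.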