We implement NNCLR and a novel clustering-based contrastive learning technique that we call KMCLR. We show that clustering embeddings to obtain a small set of prototypes, and pairing each embedding with its nearest prototype as the positive in the contrastive loss, achieves performance on par with NNCLR on CIFAR-100 while storing only 0.4% as many vectors.
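The core idea, prototypes from clustering serving as positives, can be sketched roughly as follows. This is an illustrative NumPy mock-up under our own assumptions (function names, k-means settings, and the InfoNCE formulation are ours, not taken from the paper), not the actual KMCLR implementation:

```python
import numpy as np

def kmeans_prototypes(embeddings, k, iters=10, seed=0):
    """Cluster embeddings with plain k-means; the k centroids act as prototypes."""
    rng = np.random.default_rng(seed)
    centroids = embeddings[rng.choice(len(embeddings), size=k, replace=False)]
    for _ in range(iters):
        # Assign each embedding to its nearest centroid.
        dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep old centroid if cluster is empty
                centroids[j] = embeddings[labels == j].mean(axis=0)
    return centroids

def prototype_positive(z, prototypes):
    """Replace each embedding with its nearest prototype (the positive pair),
    analogous to NNCLR's nearest-neighbor lookup but against k centroids
    instead of a large support set of stored embeddings."""
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    p_n = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return prototypes[(z_n @ p_n.T).argmax(axis=1)]

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: each (embedding, prototype) positive scored against
    in-batch negatives via softmax cross-entropy on the similarity matrix."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))  # diagonal entries are the positives
```

Storing only k centroids rather than a full support queue of embeddings is where the memory saving over NNCLR would come from in this sketch.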