A very common workflow is to index some data based on its embeddings, and then, given a new query embedding, retrieve the most similar examples with k-Nearest Neighbor (kNN) search. For example, you can imagine embedding a large collection of papers by their abstracts, and then, given a new paper of interest, retrieving the papers most similar to it.
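The kNN retrieval step above can be sketched in a few lines of NumPy. This is a minimal illustration with randomly generated embeddings standing in for real abstract embeddings; with unit-normalized vectors, cosine similarity reduces to a dot product:

```python
import numpy as np

# hypothetical index: 1000 documents embedded into 256 dims, unit-normalized
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 256))
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# a new query embedding (e.g. the abstract of the paper of interest)
query = rng.standard_normal(256)
query = query / np.linalg.norm(query)

# cosine similarity of the query against every indexed document
similarities = embeddings @ query

# indices of the k most similar documents, best first
k = 10
top_k = np.argsort(-similarities)[:k]
```

For large collections you would typically swap the brute-force dot product for an approximate index (e.g. FAISS), but the ranking logic is the same.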
TL;DR: in my experience it almost always works better to use an SVM instead of kNN, if you can afford the slight computational hit.
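One way to set this up, sketched below with scikit-learn: treat the query embedding as a single positive example and every indexed document as a negative, fit a linear SVM, and rank documents by their distance from the resulting decision boundary. The hyperparameters shown (`C`, `class_weight`, `max_iter`) are assumptions that worked reasonably for me; tune them for your data:

```python
import numpy as np
from sklearn import svm

# same hypothetical index as before: 1000 unit-normalized 256-dim embeddings
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 256))
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
query = rng.standard_normal(256)
query = query / np.linalg.norm(query)

# the query is the lone positive (label 1); all indexed docs are negatives (label 0)
x = np.concatenate([query[None, :], embeddings])
y = np.zeros(x.shape[0])
y[0] = 1

# class_weight='balanced' compensates for the extreme 1-vs-1000 label imbalance
clf = svm.LinearSVC(class_weight="balanced", max_iter=10000, tol=1e-6, C=0.1)
clf.fit(x, y)

# rank everything by signed distance to the hyperplane, best first;
# position 0 is the query itself, the rest are the retrieved documents
scores = clf.decision_function(x)
ranked = np.argsort(-scores)
```

Intuitively, the SVM learns a direction that separates the query from the whole collection, rather than measuring raw similarity to the query point alone, which tends to produce more discriminative rankings.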