A very common workflow is to index some data based on its embeddings and then, given a new query embedding, retrieve the most similar examples with k-Nearest Neighbor (kNN) search. For example, you can imagine embedding a large collection of papers by their abstracts and then, given a new paper of interest, retrieving the papers most similar to it.
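As a minimal sketch of this workflow, assuming unit-normalized embeddings (so cosine similarity reduces to a dot product) and random vectors standing in for real document embeddings:

```python
import numpy as np

# Hypothetical setup: 1000 documents embedded into 256-dim vectors,
# plus one query embedding. Values are random, for illustration only.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 256))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit-normalize rows
query = embeddings[0] + 0.1 * rng.standard_normal(256)  # a query near document 0
query /= np.linalg.norm(query)

# kNN retrieval: rank all documents by cosine similarity to the query
k = 10
similarities = embeddings @ query          # dot products = cosine sims on unit vectors
top_k = np.argsort(-similarities)[:k]      # indices of the k most similar documents
```

For large collections you would swap the brute-force dot product for an approximate index (e.g. FAISS), but the ranking logic stays the same.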
TL;DR: in my experience it ~always works better to use an SVM instead of kNN, if you can afford the slight computational hit.
We observed that the embedding representation is generally very rich and information dense. For example, reducing the dimensionality of the inputs using SVD or PCA, even by 10%, generally results in worse downstream performance on specific tasks.
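For concreteness, a roughly 10% reduction of the toy 256-dim embeddings above would look like this (230 is an illustrative target dimension; whether the lost variance matters is task-dependent):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for real embeddings: 1000 vectors of dimension 256
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 256))

# Reduce dimensionality by ~10% (256 -> 230) with PCA
pca = PCA(n_components=230)
reduced = pca.fit_transform(embeddings)

# Fraction of total variance the 230 components retain
explained = pca.explained_variance_ratio_.sum()
```

Even when `explained` is high, the discarded components can still carry task-relevant signal, which is consistent with the observation that modest reductions hurt downstream performance.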