A very common workflow is to index some data based on its embeddings and then, given a new query embedding, retrieve the most similar examples with k-Nearest Neighbor (kNN) search. For example, you can imagine embedding a large collection of papers by their abstracts and then, given a new paper of interest, retrieving the papers most similar to it.
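A minimal sketch of that baseline, assuming the embeddings are L2-normalized so cosine similarity reduces to a dot product (the data here is random just for illustration):

```python
import numpy as np

# Hypothetical setup: 1000 indexed items with 256-d embeddings, plus one query.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 256))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit vectors
query = rng.standard_normal(256)
query /= np.linalg.norm(query)

# kNN retrieval: cosine similarity is just a dot product on unit vectors.
k = 5
similarities = embeddings @ query          # (1000,) similarity scores
top_k = np.argsort(-similarities)[:k]      # indices of the k most similar items
```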
TL;DR: in my experience it ~always works better to use an SVM instead of kNN, if you can afford the slight computational hit.
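The trick is to treat the query embedding as the single positive example and every indexed item as a negative, fit a linear SVM, and rank items by the classifier's decision function instead of raw cosine similarity. A sketch of one way to do this with scikit-learn (the hyperparameters `C=0.1` and `class_weight='balanced'` are illustrative choices, not tuned values):

```python
import numpy as np
from sklearn import svm

# Hypothetical setup: same random index and query as the kNN example above.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 256))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
query = rng.standard_normal(256)
query /= np.linalg.norm(query)

# Build a binary classification problem: query = positive, index = negatives.
x = np.concatenate([query[None, :], embeddings])  # (1001, 256)
y = np.zeros(1001)
y[0] = 1  # the query is the only positive

# class_weight='balanced' compensates for the 1-vs-1000 class imbalance.
clf = svm.LinearSVC(class_weight='balanced', C=0.1, max_iter=10000, tol=1e-6)
clf.fit(x, y)

# Rank the indexed items by distance to the decision boundary.
scores = clf.decision_function(x)[1:]  # drop the query itself
top_k = np.argsort(-scores)[:5]
```

Intuitively, the SVM learns a direction that separates the query from everything else, so dimensions on which the query is not distinctive get down-weighted, whereas plain cosine similarity weights all dimensions equally.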