A very common workflow is to index some data based on its embeddings and then, given a new query embedding, retrieve the most similar examples with k-Nearest Neighbor search. For example, you can imagine embedding a large collection of papers by their abstracts and then, given a new paper of interest, retrieving the papers most similar to it.
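A minimal sketch of the kNN side of this workflow, assuming unit-normalized embeddings so that a dot product equals cosine similarity (the data here is random and the dimensions 1000×512 are arbitrary, just for illustration):

```python
import numpy as np

# Hypothetical index: 1000 documents embedded into 512-dim unit vectors.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 512))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# A new query embedding, also unit-normalized.
query = rng.standard_normal(512)
query /= np.linalg.norm(query)

# kNN retrieval: rank every document by cosine similarity to the query.
similarities = embeddings @ query        # shape (1000,)
k = 10
top_k = np.argsort(-similarities)[:k]    # indices of the k most similar docs
print(top_k, similarities[top_k])
```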
TL;DR: in my experience it ~always works better to use an SVM instead of kNN, if you can afford the slight computational hit.
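One way to do this, sketched below with sklearn's `LinearSVC`: treat the query embedding as the single positive example and every indexed document as a negative, fit a linear SVM, and rank documents by their signed distance to the decision boundary. Instead of measuring raw similarity to the query point, the SVM learns a direction that separates "query-like" from "everything else", which tends to produce better rankings. The data setup mirrors the sketch above, and the `C=0.1` regularization value is just a guess worth tuning:

```python
import numpy as np
from sklearn import svm

# Hypothetical index, as above: unit-normalized document and query embeddings.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 512))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
query = rng.standard_normal(512)
query /= np.linalg.norm(query)

# Training set: row 0 (the query) is the lone positive, all documents are negatives.
x = np.concatenate([query[None, :], embeddings])  # shape (1001, 512)
y = np.zeros(len(x))
y[0] = 1

# class_weight="balanced" compensates for the extreme 1-vs-1000 label imbalance.
clf = svm.LinearSVC(class_weight="balanced", C=0.1, max_iter=10000)
clf.fit(x, y)

# Rank the documents by signed distance to the learned decision boundary.
scores = clf.decision_function(embeddings)
top_k = np.argsort(-scores)[:10]
print(top_k, scores[top_k])
```

The computational hit mentioned above comes from fitting this small SVM per query, rather than doing a single matrix-vector product as in the kNN version.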