As data scientists, we spend a lot of our time doing exploratory data analysis (EDA), cleaning data, and making sure the data we use to generate insights is of good quality. Have you ever found…
This talk explores the integration of Knowledge Graphs (KGs) and Large Language Models (LLMs) to harness their combined power for improved natural language understanding. By pairing KGs' structured knowledge with language models' text comprehension abilities, we can combine domain-specific (and potentially sensitive) data with the general knowledge of LLMs.
We also examine how language models can enhance KGs through knowledge extraction and refinement. The integration of these technologies presents opportunities in various domains, from question-answering to chatbots, fostering more intelligent and context-aware applications.
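A minimal sketch of that pattern in Python: look up facts in a (here, hard-coded) knowledge graph and inject them into the prompt, so the model answers from curated domain-specific triples rather than from its general training data alone. The triples, the question, and the call_llm stub are illustrative placeholders, not anything from the talk.

    # Ground an LLM answer in facts pulled from a small knowledge graph.
    TRIPLES = [
        ("Aspirin", "treats", "headache"),
        ("Aspirin", "interacts_with", "warfarin"),
    ]

    def facts_about(entity):
        # Return every triple that mentions the entity.
        return [t for t in TRIPLES if entity in (t[0], t[2])]

    def build_prompt(question, entity):
        context = "\n".join(f"{s} {p} {o}" for s, p, o in facts_about(entity))
        return (
            "Answer using only the facts below.\n"
            f"Facts:\n{context}\n\n"
            f"Question: {question}"
        )

    def call_llm(prompt):
        # Placeholder for whichever chat-completion API is available.
        raise NotImplementedError

    print(build_prompt("What does aspirin interact with?", "Aspirin"))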
I recently created a demo for some prospective clients of mine, demonstrating how to use Large Language Models (LLMs) together with graph databases like Neo4J.
The two have a lot of interesting interactions: you can now create knowledge graphs more easily than ever before, by having AI find the graph entities and relationships in your unstructured data rather than having to define them all manually.
On top of that, graph databases also have some advantages for Retrieval Augmented Generation (RAG) applications compared to vector search, which is currently the prevailing approach to RAG.
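As a rough sketch of that pipeline (not the demo itself): an LLM-driven extraction step, stubbed out below, turns free text into (subject, relation, object) triples, and the official neo4j Python driver merges them into the graph. The connection details, the hard-coded triple, and the extract_triples stub are placeholders, and the code assumes a 5.x driver.

    from neo4j import GraphDatabase

    def extract_triples(text):
        # In practice: prompt an LLM to return triples as JSON and parse them.
        # Hard-coded here so the sketch stays self-contained.
        return [("Acme Corp", "ACQUIRED", "Widget Inc")]

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    def load_triples(tx, triples):
        for subject, relation, obj in triples:
            tx.run(
                "MERGE (a:Entity {name: $s}) "
                "MERGE (b:Entity {name: $o}) "
                "MERGE (a)-[:RELATION {type: $p}]->(b)",
                s=subject, p=relation, o=obj,
            )

    with driver.session() as session:
        session.execute_write(load_triples, extract_triples("Acme Corp acquired Widget Inc."))
    driver.close()

Using MERGE rather than CREATE keeps repeated extraction runs from duplicating nodes and relationships.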
One of the key enablers of the ChatGPT magic can be traced back to 2017 under the obscure name of reinforcement learning from human feedback (RLHF).
Large language models (LLMs) have become one of the most interesting environments for applying modern reinforcement learning (RL) techniques. While LLMs are great at deriving knowledge from vast amounts of text, RL can help translate that knowledge into actions. That has been the secret behind RLHF.
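A purely conceptual sketch of that loop, with every component stubbed out: the policy proposes responses, a reward model standing in for human preferences scores them, and a placeholder update nudges the policy toward higher-scoring behaviour. Real systems use a pretrained LLM, a reward model learned from preference comparisons, and PPO-style updates that also penalise drift from the original model.

    def generate_responses(policy, prompts):
        return [policy(p) for p in prompts]

    def reward_model(prompt, response):
        # Stand-in for a model trained on human preference comparisons;
        # here it simply prefers concise answers.
        return float(len(response) < 80)

    def update_policy(policy, prompts, responses, rewards):
        # Placeholder for a PPO-style policy update.
        return policy

    policy = lambda prompt: "A short answer to: " + prompt
    prompts = ["Explain RLHF in one sentence."]

    for _ in range(3):  # a few optimisation rounds
        responses = generate_responses(policy, prompts)
        rewards = [reward_model(p, r) for p, r in zip(prompts, responses)]
        policy = update_policy(policy, prompts, responses, rewards)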
In this article, we will explore how we can use Llama2 for Topic Modeling without the need to pass every single document to the model. Instead, we will leverage BERTopic, a modular topic modeling technique that can use any LLM for fine-tuning topic representations.
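A minimal sketch of that setup, assuming BERTopic's TextGeneration representation wrapper around a Hugging Face text-generation pipeline; the model name, prompt, and dataset are illustrative stand-ins. BERTopic clusters the documents first, and the LLM only rewrites each topic's label from its keywords, so the cost scales with the number of topics rather than the number of documents.

    from sklearn.datasets import fetch_20newsgroups
    from transformers import pipeline
    from bertopic import BERTopic
    from bertopic.representation import TextGeneration

    # Any document collection works; 20 newsgroups is used here only for illustration.
    docs = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))["data"]

    # [KEYWORDS] is filled in by BERTopic with each topic's top keywords.
    prompt = "I have a topic described by these keywords: [KEYWORDS]. Give it a short label."
    generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

    topic_model = BERTopic(representation_model=TextGeneration(generator, prompt=prompt))
    topics, probs = topic_model.fit_transform(docs)
    print(topic_model.get_topic_info().head())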
Large language models (LLMs) have proven to be valuable tools, but they often lack reliability. Many instances have surfaced where LLM-generated responses included false information. Specifically…
Learn Prompting is the largest and most comprehensive course in prompt engineering available on the internet, with over 60 content modules, translated into 9 languages, and a thriving community.
A. Jaiswal, S. Singh, and S. Tripathy. 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), pages 1–6. IEEE, July 2023.
C. Tahri, X. Tannier, and P. Haouat. Proceedings of the First Workshop on Information Extraction from Scientific Publications, pages 67–77. Association for Computational Linguistics, Online, November 2022.