This course provides a practical introduction to building powerful Retrieval Augmented Generation (RAG) agents using a popular, flexible technology stack. Participants will learn to combine Large Language Models (LLMs) with external knowledge sources to create applications capable of informed conversation and task execution. Building on recent advances in retrieval-based systems, this workshop focuses on hands-on implementation and efficient design patterns for RAG agents. We will explore how to orchestrate LLM calls, manage conversational flow, integrate document retrieval, and build interactive interfaces.
This course provides hands-on experience building RAG agents utilizing key technologies such as:
- LangChain: A framework for developing applications powered by language models.
- LangGraph: A library for building robust and stateful multi-actor applications with LLMs, built on top of LangChain.
- ChromaDB: An open-source vector database for storing and searching document embeddings.
- Streamlit: A Python library for creating and sharing beautiful, custom web apps for machine learning and data science.
- LLMs: Large Language Models, accessed via provider APIs.
Upon completion of this course, participants will be able to:
- Understand the core concepts of Retrieval Augmented Generation (RAG) and its benefits.
- Utilize LangChain to build foundational components for LLM applications, including document loading, text splitting, and embeddings.
- Implement and manage a vector store using ChromaDB for efficient document retrieval.
- Design and build stateful RAG agents using LangGraph to manage complex conversational flows and integrate external tools.
- Create interactive web interfaces for RAG agents using Streamlit.
- Integrate LLMs effectively within a RAG framework to generate contextually relevant responses.
- Gain insights into evaluating the performance of RAG agents.
Curriculum
- 5 Sections
- 38 Lessons
- 14 Hours
- Introduction to RAG and the Toolset (9 lessons)
- 1.1 Introduction to the RAG concept
- 1.2 Overview of the technologies used: LangChain, LangGraph, ChromaDB, Streamlit, and LLM APIs
- 1.3 Setting up the development environment
- 1.4 Understanding the components of a RAG system: the retriever and the generator
- 1.5 Discussing different retrieval strategies
- 1.6 Exploring the role of embeddings and vector databases
- 1.7 Loading and processing documents
- 1.8 Splitting documents into smaller chunks for effective retrieval
- 1.9 Generating document embeddings using various embedding models
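The chunking strategy in lesson 1.8 can be sketched in plain Python. This is a minimal, framework-free illustration of fixed-size splitting with overlap; in the course itself, LangChain's text splitters provide a more robust, separator-aware version of the same idea, and the `chunk_size`/`overlap` values here are arbitrary examples.

```python
def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into chunks of at most `chunk_size` characters,
    with `overlap` characters shared between consecutive chunks."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each time
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "Retrieval Augmented Generation grounds LLM answers in retrieved context. " * 5
chunks = split_text(doc, chunk_size=80, overlap=16)
```

The overlap ensures that a sentence cut at a chunk boundary still appears intact in at least one chunk, which improves retrieval recall.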
- Building the Retrieval System with ChromaDB (9 lessons)
- 2.1 Concepts of vector embeddings and similarity search
- 2.2 Introduction to ChromaDB as a persistent vector store
- 2.3 Setting up and initializing a ChromaDB collection
- 2.4 Loading documents and creating embeddings using LangChain
- 2.5 Storing document chunks and their embeddings in ChromaDB
- 2.6 Discussing strategies for handling large datasets
- 2.7 Querying the ChromaDB index for relevant document chunks based on user input
- 2.8 Exploring different similarity search methods provided by ChromaDB
- 2.9 Integrating ChromaDB with LangChain’s retriever interface
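What a vector store like ChromaDB does at query time (lessons 2.1 and 2.7) reduces to a simple loop: embed the query, score every stored vector by cosine similarity, return the top-k chunks. The sketch below illustrates only that concept; the bag-of-words "embedding" and the tiny vocabulary are stand-ins for a real embedding model and are not how ChromaDB is actually implemented.

```python
import math

VOCAB = ["rag", "retrieval", "vector", "streamlit", "interface", "llm"]

def embed(text: str) -> list[float]:
    # Toy embedding: word counts over a fixed vocabulary.
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Score every document against the query and keep the k best.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(embed(d), q), reverse=True)[:k]

docs = [
    "rag combines retrieval with an llm",
    "streamlit builds the chat interface",
    "vector search powers the retrieval step",
]
```

A real vector database replaces the brute-force scan with an approximate nearest-neighbour index, but the input/output contract is the same.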
- Orchestrating the RAG Agent with LangGraph (9 lessons)
- 3.1 The need for state management and orchestration in complex LLM applications
- 3.2 Overview of LangGraph’s state-based approach and graph representation
- 3.3 Defining states and nodes in a LangGraph application
- 3.4 Creating nodes for document retrieval and LLM generation
- 3.5 Running a simple RAG query through the LangGraph
- 3.6 Managing conversation history within the LangGraph state
- 3.7 Handling multi-turn conversations
- 3.8 Introducing the concept of tools for LLM agents
- 3.9 Enabling the agent to interact with external systems
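The core idea behind lessons 3.3–3.5 is that nodes each read and update a shared state as the graph runs. The following is a framework-free analogue of that pattern, not actual LangGraph API usage: the in-memory `corpus` and the string-formatting "generator" are placeholders for the ChromaDB query and the LLM call built in the course.

```python
def retrieve(state: dict) -> dict:
    # Placeholder retrieval node: in the course this queries ChromaDB.
    corpus = {"rag": "RAG grounds answers in retrieved documents."}
    context = [v for k, v in corpus.items() if k in state["question"].lower()]
    return {**state, "context": context}

def generate(state: dict) -> dict:
    # Placeholder generation node: in the course this calls an LLM.
    answer = f"Based on {len(state['context'])} chunk(s): {' '.join(state['context'])}"
    return {**state, "answer": answer}

GRAPH = [retrieve, generate]  # a minimal linear two-node graph

def run(question: str) -> dict:
    # Thread one state dict through every node, as a graph runner would.
    state = {"question": question, "context": [], "answer": ""}
    for node in GRAPH:
        state = node(state)
    return state

result = run("What is RAG?")
```

LangGraph generalizes this loop with typed state schemas, conditional edges, and checkpointing, which is what makes multi-turn and tool-using agents manageable.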
- Building the User Interface with Streamlit (6 lessons)
- 4.1 Creating simple Streamlit applications
- 4.2 Integrating the LangGraph agent execution within a Streamlit application
- 4.3 Passing user input from Streamlit to the LangGraph agent
- 4.4 Displaying the agent’s responses in the Streamlit interface
- 4.5 Displaying source documents for retrieved information
- 4.6 Creating a conversational chat interface
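Streamlit's chat pattern (lessons 4.3–4.6) keeps the running conversation in `st.session_state` and re-renders the message list on every interaction. The data flow can be sketched without the framework: here `session_state` is a plain dict standing in for `st.session_state`, and `answer_fn` is a placeholder for the LangGraph agent call.

```python
from typing import Callable

def handle_turn(session_state: dict, user_input: str,
                answer_fn: Callable[[str], str]) -> None:
    """Append the user message and the agent's reply to the chat history."""
    history = session_state.setdefault("messages", [])
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": answer_fn(user_input)})

state = {}
handle_turn(state, "What is RAG?", lambda q: f"You asked: {q}")
handle_turn(state, "And LangGraph?", lambda q: f"You asked: {q}")
```

In the actual app, the same history list is what gets rendered with Streamlit's chat widgets and passed back into the agent for multi-turn context.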
- Advanced Topics and Evaluation (5 lessons)
- 5.1 Discussing techniques like HyDE, re-ranking, and query expansion
- 5.2 Considering different LLM prompting strategies for RAG
- 5.3 Metrics for evaluating retrieval quality and generation quality
- 5.4 Discussing methods for testing and debugging RAG agents
- 5.5 Options for deploying Streamlit applications and the RAG agent
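Re-ranking (lesson 5.1) means taking the candidates from a fast first-pass retrieval and reordering them with a second, finer score before they reach the LLM. Production systems typically use a cross-encoder model for that second score; the sketch below substitutes simple query–document word overlap purely to show the shape of the technique.

```python
def rerank(query: str, candidates: list[str]) -> list[str]:
    """Reorder first-pass candidates by a finer relevance score
    (here: count of query words appearing in the document)."""
    q_words = set(query.lower().split())

    def score(doc: str) -> int:
        return len(q_words & set(doc.lower().split()))

    return sorted(candidates, key=score, reverse=True)

candidates = [
    "streamlit renders the chat interface",
    "chromadb stores document embeddings for retrieval",
    "embeddings enable retrieval over document chunks",
]
ranked = rerank("retrieval over document embeddings", candidates)
```

Because only a handful of candidates survive the first pass, the re-rank step can afford a much more expensive scorer than the vector index itself.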
I’d like to thank you for the effort you invested during the workshop at the IBI2025 conference.
* The workshop was very beneficial to my studies.
* You were engaged and very helpful.
* You showed good skills in your field.
Great workshop, with very helpful information on understanding and working with RAG and LLMs.
Great workshop! The content was clear and practical, especially the part on RAG and LLM integration.
I really appreciated the hands-on approach and would definitely join future sessions.
Thank you, Dr. Nabil Omri, for the valuable insights and support throughout the workshop; it was well worth it.
Very insightful and instructive workshop!