
Build Intelligent AI Apps with Retrieval-Augmented Generation (RAG)
Prerequisites: Python, basics of machine learning/NLP, REST APIs
Skills you will gain:
About the Program:
Build Intelligent AI Apps with Retrieval-Augmented Generation (RAG) is a cutting-edge international workshop focused on one of the most powerful emerging techniques in AI today. RAG combines the generative power of LLMs with retrieval systems like vector databases to deliver contextually relevant, up-to-date, and verifiable outputs.
Participants will learn to architect and implement RAG pipelines; work with tools such as LangChain, Haystack, Pinecone, and FAISS; and integrate them with LLMs (OpenAI, Cohere, DeepSeek, or Hugging Face models) for real-world use cases in enterprise automation, research assistants, chatbots, legal tech, and knowledge engines.
Aim: To empower participants with the practical skills and conceptual understanding needed to design, build, and deploy Retrieval-Augmented Generation (RAG) pipelines that combine Large Language Models (LLMs) with dynamic, up-to-date data sources for intelligent, trustworthy AI applications.
Program Objectives:
- Demystify Retrieval-Augmented Generation and its components
- Enable real-world implementation with popular AI toolkits
- Teach prompt injection prevention, grounding, and retrieval optimization
- Stimulate innovation in building knowledge-enhanced LLM apps
- Guide participants to build and deploy a fully functional AI app with RAG
What you will learn
Day 1: Introduction & Fundamentals of RAG
Title: RAG 101 – Foundation of Retrieval-Augmented Generation
- Overview of Large Language Models (LLMs) & Their Limitations
- Why RAG? – Bridging LLMs with External Knowledge
- Anatomy of a RAG System: Retriever + Generator
- Tools: FAISS, ChromaDB, LangChain, HuggingFace, OpenAI
Hands-on:
- Build a vector store using FAISS/Chroma
- Embed documents using SentenceTransformers or OpenAI Embeddings
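The Day 1 hands-on can be sketched in a few lines. The workshop itself uses SentenceTransformers or OpenAI embeddings with a FAISS/Chroma index; in this illustrative stand-in, a toy hashed bag-of-words embedding and brute-force cosine search replace them so the sketch runs with NumPy alone, while keeping the same add/search shape a real vector store exposes.

```python
# Toy version of the Day 1 hands-on: embed documents, store vectors, search.
# The hashed bag-of-words embedding below is a stand-in for real embedding
# models, and the brute-force search is what FAISS does efficiently at scale.
import numpy as np

DIM = 64  # toy embedding dimension (real models use 384+ dims)

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash each token into a bucket of a fixed-size vector."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class VectorStore:
    """Minimal in-memory vector store with inner-product search."""
    def __init__(self):
        self.vectors, self.docs = [], []

    def add(self, doc: str) -> None:
        self.docs.append(doc)
        self.vectors.append(embed(doc))

    def search(self, query: str, k: int = 2) -> list[str]:
        # Inner product of unit vectors = cosine similarity.
        scores = np.array(self.vectors) @ embed(query)
        return [self.docs[i] for i in np.argsort(-scores)[:k]]

store = VectorStore()
for doc in ["RAG combines retrieval with generation",
            "FAISS is a vector similarity search library",
            "Gradio builds quick ML demos"]:
    store.add(doc)

print(store.search("vector similarity search", k=1))
```

Swapping `embed` for a SentenceTransformers model and `VectorStore` for a FAISS index preserves this exact interface, which is the point of the exercise.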
Day 2: Building the RAG Pipeline
Title: Designing the RAG Workflow
- Types of Retrievers: Dense vs Sparse
- Query Rewriting, Chunking & Indexing
- LangChain vs Haystack vs Custom RAG Pipelines
- Prompt Engineering in RAG
Hands-on:
- Build a RAG pipeline using LangChain
- Query documents and generate contextual answers using GPT
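The Day 2 pipeline's control flow — retrieve, ground the prompt, generate — can be sketched end to end. LangChain and GPT are assumed in the actual hands-on; here a keyword-overlap retriever and a stubbed generator stand in, so the orchestration that LangChain provides is visible without any API keys.

```python
# Skeleton of the Day 2 RAG workflow: retrieve -> build grounded prompt ->
# generate. The retriever and generator are deliberately simple stand-ins.

DOCS = [
    "RAG grounds LLM answers in retrieved documents.",
    "Chunking splits long documents into retrievable passages.",
    "Dense retrievers use embeddings; sparse retrievers use term matching.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Sparse-style retriever: rank documents by query-term overlap."""
    q = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Grounding prompt: instruct the model to answer only from the context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{joined}\n\nQuestion: {query}\nAnswer:")

def generate(prompt: str) -> str:
    """Stub generator; in the workshop this is an OpenAI/Hugging Face call."""
    context = prompt.split("Context:")[1].split("Question:")[0].strip()
    return f"[LLM answer grounded in: {context}]"

def rag_answer(query: str) -> str:
    return generate(build_prompt(query, retrieve(query)))

print(rag_answer("What do dense retrievers use?"))
```

The prompt template is where the Day 2 "Prompt Engineering in RAG" topic lives: constraining the model to the retrieved context is the basic grounding technique.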
Day 3: Advanced Features & Deployment
Title: Productizing Your RAG Application
- Evaluation Metrics for RAG Systems (EM, BLEU, Recall@k)
- Caching, Logging, and Observability
- Securing APIs & Cost Management
- Open-source RAG use cases (e.g., DocChat, Chat with PDF, Legal AI, Academic Copilots)
Hands-on:
- Deploy a working RAG chatbot on Gradio
- Custom PDF/CSV Q&A assistant (bring-your-own-data)
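Two of the Day 3 evaluation metrics can be written out directly. EM and Recall@k are simple enough to implement from scratch; BLEU involves n-gram machinery and is typically taken from a library, so it is omitted here.

```python
# Exact Match (answer quality) and Recall@k (retriever quality), the two
# directly computable metrics from the Day 3 evaluation topic.

def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 if the normalized prediction equals the reference exactly."""
    norm = lambda s: " ".join(s.lower().split())
    return 1.0 if norm(prediction) == norm(reference) else 0.0

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Recall@k: fraction of relevant documents found in the top-k results."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

print(exact_match("Paris ", "paris"))                       # 1.0
print(recall_at_k(["d1", "d3", "d2"], {"d2", "d4"}, k=3))   # 0.5
```

In practice EM is averaged over a Q&A test set, and Recall@k is swept over several values of k to choose how many chunks the retriever should return.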
Mentor Profile
Fee Plan
Get an e-Certificate of Participation!

Intended For:
- AI/ML developers and data scientists
- Backend or full-stack developers building AI integrations
- NLP researchers and LLM enthusiasts
- LegalTech, HealthTech, and EdTech innovators
- Students with a working knowledge of Python and NLP
Career Supporting Skills
Program Outcomes
- Master RAG pipeline design and deployment
- Learn to embed documents and retrieve relevant context dynamically
- Combine search and generation seamlessly for fact-rich AI applications
- Build an intelligent AI assistant that can reason over documents or databases
- Gain hands-on experience with LangChain, FAISS, Pinecone, and Hugging Face
- Receive international workshop certification and reusable project templates
