Build Intelligent AI Apps with Retrieval-Augmented Generation (RAG)
Prerequisites: Python, basics of Machine Learning/NLP, REST APIs
About This Course
Build Intelligent AI Apps with Retrieval-Augmented Generation (RAG) is a cutting-edge international workshop focused on one of the most powerful emerging techniques in AI today. RAG combines the generative power of LLMs with retrieval systems like vector databases to deliver contextually relevant, up-to-date, and verifiable outputs.
Participants will learn to architect and implement RAG pipelines, work with tools such as LangChain, Haystack, Pinecone, and FAISS, and integrate them with LLMs (OpenAI, Cohere, DeepSeek, or Hugging Face models) for real-world use cases in enterprise automation, research assistants, chatbots, legal tech, and knowledge engines.
Aim
To empower participants with the practical skills and conceptual understanding needed to design, build, and deploy Retrieval-Augmented Generation (RAG) pipelines that combine Large Language Models (LLMs) with dynamic, up-to-date data sources for intelligent, trustworthy AI applications.
Workshop Objectives
- Demystify Retrieval-Augmented Generation and its components
- Enable real-world implementation with popular AI toolkits
- Teach prompt injection prevention, grounding, and retrieval optimization
- Stimulate innovation in building knowledge-enhanced LLM apps
- Guide participants to build and deploy a fully functional AI app with RAG
Workshop Structure
Day 1: Introduction & Fundamentals of RAG
Title: RAG 101 – Foundation of Retrieval-Augmented Generation
- Overview of Large Language Models (LLMs) & Their Limitations
- Why RAG? – Bridging LLMs with External Knowledge
- Anatomy of a RAG System: Retriever + Generator
- Tools: FAISS, ChromaDB, LangChain, Hugging Face, OpenAI
Hands-on:
- Build a vector store using FAISS/Chroma
- Embed documents using SentenceTransformers or OpenAI Embeddings
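The Day 1 hands-on session centers on the idea above: embed documents as vectors, then answer queries by nearest-neighbor search. As a minimal sketch of what FAISS/Chroma provide at scale, here is an in-memory vector store in pure Python; the `toy_embed` bag-of-letters function is a stand-in assumption for real SentenceTransformers or OpenAI embeddings:

```python
import math

def toy_embed(text):
    """Toy 26-dim bag-of-letters embedding, normalized to unit length.
    A real pipeline would use SentenceTransformers or OpenAI embeddings."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class ToyVectorStore:
    """Minimal in-memory vector store: add documents, search by cosine
    similarity. FAISS/Chroma do the same job with fast ANN indexes."""
    def __init__(self):
        self.docs, self.vecs = [], []

    def add(self, text):
        self.docs.append(text)
        self.vecs.append(toy_embed(text))

    def search(self, query, k=1):
        q = toy_embed(query)
        scored = [(sum(a * b for a, b in zip(q, v)), d)
                  for v, d in zip(self.vecs, self.docs)]
        return [d for _, d in sorted(scored, reverse=True)[:k]]
```

Swapping `toy_embed` for a real embedding model and `ToyVectorStore` for a FAISS index is exactly the step participants take in the session.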
Day 2: Building the RAG Pipeline
Title: Designing the RAG Workflow
- Types of Retrievers: Dense vs Sparse
- Query Rewriting, Chunking & Indexing
- LangChain vs Haystack vs Custom RAG Pipelines
- Prompt Engineering in RAG
Hands-on:
- Build a RAG pipeline using LangChain
- Query documents and generate contextual answers using GPT
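The Day 2 pipeline reduces to three steps: retrieve relevant chunks, assemble a grounded prompt, and generate. Since LangChain's API changes between versions, here is a framework-free sketch of that loop under stated assumptions: the word-overlap retriever is a naive sparse stand-in, and `llm` is a placeholder for a real GPT call:

```python
def retrieve(query, chunks, k=2):
    """Naive sparse retriever: rank chunks by word overlap with the query.
    A production system would use dense embeddings or BM25 instead."""
    q_words = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, context_chunks):
    """Ground the LLM by placing retrieved context before the question."""
    context = "\n".join(f"- {c}" for c in context_chunks)
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer:")

def rag_answer(query, chunks, llm):
    """The core RAG loop: retrieve -> prompt -> generate."""
    return llm(build_prompt(query, retrieve(query, chunks)))
```

In the session, `llm` becomes an actual call to GPT and `retrieve` is backed by the Day 1 vector store, but the control flow stays the same.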
Day 3: Advanced Features & Deployment
Title: Productizing Your RAG Application
- Evaluation Metrics for RAG Systems (Exact Match, BLEU, Recall@k)
- Caching, Logging, and Observability
- Securing APIs & Cost Management
- Open-source RAG use cases (e.g., DocChat, Chat with PDF, Legal AI, Academic Copilots)
Hands-on:
- Deploy a working RAG chatbot with Gradio
- Custom PDF/CSV Q&A assistant (bring-your-own-data)
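Of the Day 3 metrics, Recall@k is the most retrieval-specific: the fraction of queries for which a known-relevant (gold) document appears among the top-k retrieved results. A minimal sketch, with hypothetical document IDs standing in for real retrieval output:

```python
def recall_at_k(retrieved_lists, gold_docs, k):
    """Recall@k: fraction of queries whose gold document appears in the
    top-k retrieved results. retrieved_lists[i] is the ranked list of
    results for query i; gold_docs[i] is that query's relevant document."""
    hits = sum(1 for retrieved, gold in zip(retrieved_lists, gold_docs)
               if gold in retrieved[:k])
    return hits / len(gold_docs)
```

Tracking this number while tuning chunk size or retriever type is the kind of evaluation loop the session walks through.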
Who Should Enrol?
- AI/ML developers and data scientists
- Backend or full-stack developers building AI integrations
- NLP researchers and LLM enthusiasts
- LegalTech, HealthTech, and EdTech innovators
- Students with a working knowledge of Python and NLP
Important Dates
Registration Ends
05/15/2025, 6:00 PM IST
Workshop Dates
05/15/2025 – 05/17/2025, 7:00 PM IST
Workshop Outcomes
- Master RAG pipeline design and deployment
- Learn to embed documents and retrieve relevant context dynamically
- Combine search and generation seamlessly for fact-rich AI applications
- Build an intelligent AI assistant that can reason over documents or databases
- Gain hands-on experience with LangChain, FAISS, Pinecone, and Hugging Face
- Receive international workshop certification and reusable project templates
Fee Structure
Student: ₹1999 | $50
Ph.D. Scholar / Researcher: ₹2999 | $55
Academician / Faculty: ₹3999 | $60
Industry Professional: ₹5999 | $75
What You’ll Gain
- Live & recorded sessions
- e-Certificate upon completion
- Post-workshop query support
- Hands-on learning experience
Join Our Hall of Fame!
Take your research to the next level with NanoSchool.
Publication Opportunity
Get published in a prestigious open-access journal.
Centre of Excellence
Become part of an elite research community.
Networking & Learning
Connect with global researchers and mentors.
Global Recognition
Worth ₹20,000 / $1,000 in academic value.
