

Hands-On Workshop: Building a RAG-Powered Q&A Bot

Unlock the Power of AI with a Hands-On Workshop: Build, Deploy, and Optimize Your Own RAG-Powered Q&A Bot

  • Mode: Virtual / Online
  • Type: Mentor Based
  • Level: Advanced
  • Duration: 1 Day
  • Starts: 28 June 2025
  • Time: 5 PM IST

About This Course

This hands-on workshop teaches participants how to build, deploy, and optimize a RAG-powered Q&A bot. Attendees will learn to set up the environment, ingest and embed documents, implement vector retrieval, and integrate a language model via FastAPI, with a focus on testing and performance optimization.

Aim

The aim of this workshop is to equip participants with the skills to build, deploy, and optimize a RAG-powered Q&A bot, covering document ingestion, vector retrieval, and integration of language models through a FastAPI endpoint.

Workshop Structure

  • Environment & Dependencies Setup

    Ensure the environment is properly configured for the project by following these steps:

    • Spin up a Python virtual environment (venv) for isolation.

    • Install necessary dependencies using pip:

      bash
      pip install langchain faiss-cpu openai python-dotenv fastapi uvicorn
    • Create a .env file to store your OpenAI API key (loaded in code as sketched below).

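    A minimal sketch of loading the key in Python, assuming the python-dotenv package from the install step; OPENAI_API_KEY is the variable name the OpenAI client reads from the environment.

      python
      # Load the OpenAI API key from .env (requires the python-dotenv package)
      import os
      from dotenv import load_dotenv

      load_dotenv()  # reads key=value pairs from .env into the process environment
      assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing from .env"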

  • Document Ingestion & Embedding

    Integrate document ingestion and text embedding with the following process:

    • Write a script to load sample text/PDFs from the ./docs/ directory.

    • Chunk the texts into manageable segments (e.g., 500 tokens per chunk).

    • Use OpenAIEmbeddings to generate embeddings and store them in FAISS for efficient retrieval (see the sketch below).

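    A minimal ingestion sketch, assuming plain-text files in ./docs/ and the classic langchain module layout (import paths differ in newer releases); the 500-character chunk size stands in for the ~500-token target above, and "faiss_index" is an arbitrary save path.

      python
      # Load plain-text files from ./docs/, chunk them, embed the chunks, and index them in FAISS
      from pathlib import Path
      from langchain.document_loaders import TextLoader
      from langchain.text_splitter import RecursiveCharacterTextSplitter
      from langchain.embeddings import OpenAIEmbeddings
      from langchain.vectorstores import FAISS

      documents = []
      for path in Path("./docs").glob("*.txt"):            # PDFs would need a PDF loader instead
          documents.extend(TextLoader(str(path)).load())

      # Split into ~500-character chunks with a small overlap so sentences are not cut abruptly
      splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
      chunks = splitter.split_documents(documents)

      embeddings = OpenAIEmbeddings()                      # picks up OPENAI_API_KEY from the environment
      vectorstore = FAISS.from_documents(chunks, embeddings)
      vectorstore.save_local("faiss_index")                # persist the index for the next steps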

  • Vector Store & Retrieval Function

    Set up the vector store and implement a retrieval function:

    • Initialize the FAISS index in your code for storing and searching vectors.

    • Implement the retrieve(query) function using index.similarity_search (see the sketch below).

    • Perform quick tests by printing out retrieved chunks for sample queries.

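    A sketch of the retrieval helper, assuming vectorstore is the FAISS object built in the previous step (or reloaded with FAISS.load_local); the sample query is illustrative only.

      python
      # Retrieval helper over the FAISS vector store built in the previous step
      def retrieve(query: str, k: int = 3):
          """Return the k most similar document chunks for a query."""
          return vectorstore.similarity_search(query, k=k)

      # Quick test: print the retrieved chunks for a sample query
      for doc in retrieve("What topics do the sample documents cover?"):
          print(doc.page_content[:200])
          print("---")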

  • QA Chain Implementation

    Define a QA chain that utilizes the retrieved context for answering questions:

    • Create a prompt template that dynamically injects the retrieved context into the model.

    • Integrate the LLMChain (or RetrievalQA from LangChain) to process the query.
      Example code:

      python
      docs = retrieve(user_question)                               # fetch relevant chunks
      context = "\n\n".join(doc.page_content for doc in docs)      # flatten chunks into one string
      answer = llm_chain.predict(context=context, question=user_question)
    • Test the system by asking 2–3 different questions to check whether it retrieves accurate answers (a fuller sketch follows).

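    A fuller sketch of the chain using the classic LLMChain and PromptTemplate APIs (RetrievalQA from LangChain is a higher-level alternative, and import paths differ across versions); the sample question is a placeholder.

      python
      # Prompt that injects retrieved context, wired into an LLMChain
      from langchain.chat_models import ChatOpenAI
      from langchain.prompts import PromptTemplate
      from langchain.chains import LLMChain

      prompt = PromptTemplate(
          input_variables=["context", "question"],
          template=(
              "Answer the question using only the context below. "
              "If the answer is not in the context, say you don't know.\n\n"
              "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
          ),
      )
      llm_chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)

      user_question = "What is the main topic of the documents?"   # placeholder question
      docs = retrieve(user_question)
      context = "\n\n".join(doc.page_content for doc in docs)      # flatten chunks into one string
      print(llm_chain.predict(context=context, question=user_question))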

  • API Endpoint / Minimal Interface

    Expose the solution via an API:

    • Scaffold a FastAPI app with a /qa POST endpoint.

    • Integrate the retrieve and llm_chain inside this endpoint for real-time querying.

    • Test the API via curl or Postman to confirm it works end to end (example call in the sketch below).

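    A minimal FastAPI sketch of the /qa endpoint, reusing the retrieve() helper and llm_chain from the earlier steps; the module name main and the request field question are assumptions for illustration.

      python
      # main.py – expose the QA chain behind a /qa POST endpoint
      from fastapi import FastAPI
      from pydantic import BaseModel

      app = FastAPI()

      class QARequest(BaseModel):
          question: str

      @app.post("/qa")
      def qa(request: QARequest):
          docs = retrieve(request.question)
          if not docs:
              return {"answer": "No context found."}
          context = "\n\n".join(doc.page_content for doc in docs)
          answer = llm_chain.predict(context=context, question=request.question)
          return {"answer": answer}

      # Run:  uvicorn main:app --reload
      # Test: curl -X POST http://127.0.0.1:8000/qa \
      #         -H "Content-Type: application/json" \
      #         -d '{"question": "What do the documents describe?"}'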

  • Testing, Debugging & Extensions

    Ensure the system is robust and extendable:

    • Handle cases where no results are found: return a message like “No context found.”

    • Experiment with different chunk sizes and k-values in retrieval for performance tuning.

    • Conduct a performance check: measure latency for sample queries and optimize as necessary (a timing sketch follows).

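    A rough timing sketch for the performance check, reusing retrieve() and llm_chain from the earlier steps; the sample queries are placeholders, and actual latency depends on the network and model.

      python
      # Measure end-to-end latency for a few sample queries; tune k and chunk size as needed
      import time

      sample_queries = [
          "Summarise the main topic of the documents.",
          "Which dates are mentioned?",
      ]

      for q in sample_queries:
          start = time.perf_counter()
          docs = retrieve(q, k=3)
          if not docs:
              print(f"{q!r}: No context found.")
              continue
          context = "\n\n".join(d.page_content for d in docs)
          llm_chain.predict(context=context, question=q)
          print(f"{q!r}: {time.perf_counter() - start:.2f}s end-to-end")
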
Who Should Enrol?

This workshop is intended for developers, data scientists, and AI enthusiasts with a basic understanding of Python programming and machine learning concepts. Familiarity with APIs, natural language processing (NLP), and working with vector databases like FAISS is beneficial but not required.

Important Dates

Registration Ends: 28 June 2025 (IST)

Workshop Date: 28 June 2025, 5 PM IST

Workshop Outcomes

  • Gain practical experience in building a RAG-powered Q&A bot using Python, LangChain, and FAISS.

  • Learn how to ingest, process, and embed documents for retrieval-based systems.

  • Understand how to implement vector retrieval functions and integrate them with language models.

  • Gain hands-on experience in deploying a Q&A bot via FastAPI.

  • Acquire skills in debugging, performance optimization, and handling edge cases in real-world applications.

  • Develop the ability to create and deploy intelligent Q&A systems for various use cases.

Meet Your Mentor(s)

Dr Shiv Kumar Verma

Professor

Sharda Institute of Engineering & Technology



Fee Structure

Student: ₹999 | $29

Ph.D. Scholar / Researcher: ₹1499 | $39

Academician / Faculty: ₹1999 | $45

Industry Professional: ₹2499 | $65

What You’ll Gain

  • Live & recorded sessions
  • e-Certificate upon completion
  • Post-workshop query support
  • Hands-on learning experience

Join Our Hall of Fame!

Take your research to the next level with NanoSchool.

Publication Opportunity

Get published in a prestigious open-access journal.

Centre of Excellence

Become part of an elite research community.

Networking & Learning

Connect with global researchers and mentors.

Global Recognition

Worth ₹20,000 / $1,000 in academic value.

Need Help?

We’re here for you!


(+91) 120-4781-217

★★★★★
Scientific Paper Writing: Tools and AI for Efficient and Effective Research Communication

Very much informative

GEETA BRIJWANI
★★★★★
Prediction of Protein Structure Using AlphaFold: An Artificial Intelligence (AI) Program

New directions for thinking

Sher Singh
★★★★★
AI for Environmental Monitoring and Sustainability

Great mentor!

Mladen Kulev
★★★★★
Prediction of Protein Structure Using AlphaFold: An Artificial Intelligence (AI) Program

Thanks for the very attractive topics and excellent lectures. I think it would be better to include more application examples/software.

Yujia Wu


Stay Updated


Join our mailing list for exclusive offers and course announcements
