New Year Offer End Date: 30th April 2024

Hands-On Workshop: Building a RAG-Powered Q&A Bot

Unlock the Power of AI with a Hands-On Workshop: Build, Deploy, and Optimize Your Own RAG-Powered Q&A Bot


About Program:

This hands-on workshop teaches participants how to build, deploy, and optimize a RAG-powered Q&A bot. Attendees will learn to set up the environment, ingest and embed documents, implement vector retrieval, and integrate a language model via FastAPI, with a focus on testing and performance optimization.

Aim: The aim of this workshop is to equip participants with the skills to build, deploy, and optimize a RAG-powered Q&A bot, covering document ingestion, vector retrieval, and integration of language models through a FastAPI endpoint.

Program Objectives:

What will you learn?

  • Environment & Dependencies Setup

    Ensure the environment is properly configured for the project by following these steps:

    • Spin up a Python virtual environment (venv) for isolation.

    • Install necessary dependencies using pip:

      ```bash
      pip install langchain faiss-cpu openai python-dotenv fastapi uvicorn
      ```
    • Create a .env file to store your API key for OpenAI integration.
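A minimal .env for this setup can be a single line (OPENAI_API_KEY is the variable the OpenAI client library reads by default; the value shown is a placeholder):

```
OPENAI_API_KEY=your-key-here
```

Keep this file out of version control (add it to .gitignore).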


  • Document Ingestion & Embedding

    Integrate document ingestion and text embedding with the following process:

    • Write a script to load sample text/PDFs from the ./docs/ directory.

    • Chunk the texts into manageable segments (e.g., 500 tokens per chunk).

    • Use OpenAIEmbeddings to generate embeddings and store them in FAISS for efficient retrieval.
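The chunking step above can be sketched without any dependencies. This helper splits on whitespace as a rough proxy for tokens (the workshop itself would use a tokenizer-aware splitter from LangChain before embedding with OpenAIEmbeddings; `chunk_text` and its parameters are illustrative, not workshop code):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-based chunks.

    Words stand in for tokens here; a real pipeline would count tokens.
    """
    words = text.split()
    step = max(1, chunk_size - overlap)  # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break  # the last window already covered the tail
    return chunks
```

Each chunk overlaps its neighbour so a sentence falling on a boundary still appears whole in at least one chunk.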


  • Vector Store & Retrieval Function

    Set up the vector store and implement a retrieval function:

    • Initialize the FAISS index in your code for storing and searching vectors.

    • Implement the retrieve(query) function using index.similarity_search.

    • Perform quick tests by printing out retrieved chunks for sample queries.
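At heart, similarity_search is a nearest-neighbour lookup by vector similarity. A toy in-memory version makes the idea concrete (the names here are illustrative; FAISS performs the same ranking, just far faster and at scale):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: list[float],
             index: list[tuple[list[float], str]],
             k: int = 3) -> list[str]:
    """Return the k chunk texts most similar to the query vector.

    `index` is a plain list of (embedding, chunk_text) pairs standing in
    for the FAISS index used in the workshop.
    """
    ranked = sorted(index, key=lambda pair: cosine(query_vec, pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]
```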


  • QA Chain Implementation

    Define a QA chain that utilizes the retrieved context for answering questions:

    • Create a prompt template that dynamically injects the retrieved context into the model.

    • Integrate the LLMChain (or RetrievalQA from LangChain) to process the query.
      Example code:

      ```python
      docs = retrieve(user_question)
      answer = llm_chain.predict(context=docs, question=user_question)
      ```
    • Test the system by asking 2–3 different questions to check if it retrieves accurate answers.
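The context-injection step itself can be sketched with plain string formatting (the template wording and the build_prompt helper are illustrative; LangChain's PromptTemplate plays this role inside LLMChain):

```python
PROMPT_TEMPLATE = (
    "Answer the question using only the context below.\n"
    "If the context does not contain the answer, say you don't know.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

def build_prompt(docs: list[str], question: str) -> str:
    """Join retrieved chunks and inject them into the template,
    mirroring what the chain does before calling the model."""
    context = "\n---\n".join(docs)
    return PROMPT_TEMPLATE.format(context=context, question=question)
```

Grounding the model in retrieved context this way is what makes the bot answer from your documents rather than from its general training data.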


  • API Endpoint / Minimal Interface

    Expose the solution via an API:

    • Scaffold a FastAPI app with a /qa POST endpoint.

    • Integrate the retrieve and llm_chain inside this endpoint for real-time querying.

    • Test the API via curl or Postman to ensure functionality.
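The /qa POST contract can be illustrated without FastAPI at all. The sketch below is a dependency-free stand-in built on the standard library's http.server, with a stubbed answer function in place of retrieve plus the LLM chain; the workshop itself uses FastAPI, which handles validation and JSON parsing for you:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def answer_question(question: str) -> str:
    """Stub standing in for retrieve() + the LLM chain."""
    return f"(stub answer for: {question})"

class QAHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/qa":
            self.send_error(404)
            return
        # Read the JSON body: {"question": "..."}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(
            {"answer": answer_question(payload.get("question", ""))}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet
```

Serving it with HTTPServer(("127.0.0.1", 8000), QAHandler).serve_forever() lets you exercise the same curl/Postman tests you would run against the FastAPI version.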


  • Testing, Debugging & Extensions

    Ensure the system is robust and extendable:

    • Handle cases where no results are found: return a message like “No context found.”

    • Experiment with different chunk sizes and k-values in retrieval for performance tuning.

    • Conduct a performance check: Measure latency for sample queries and optimize as necessary.
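The latency check can be scripted with the standard library (measure_latency is an illustrative helper, not workshop code; pass it your end-to-end pipeline callable):

```python
import statistics
import time

def measure_latency(pipeline, queries, repeats: int = 3) -> float:
    """Median wall-clock latency in milliseconds for pipeline(query) calls."""
    samples = []
    for query in queries:
        for _ in range(repeats):
            start = time.perf_counter()
            pipeline(query)
            samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)
```

Comparing this number across chunk sizes and k-values gives a simple, repeatable basis for the tuning suggested above.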

Mentor Profile

Professor, Sharda Institute of Engineering & Technology

Fee Plan

INR 1999/- or USD 50

Get an e-Certificate of Participation!


Intended For:

This workshop is intended for developers, data scientists, and AI enthusiasts with a basic understanding of Python programming and machine learning concepts. Familiarity with APIs, natural language processing (NLP), and working with vector databases like FAISS is beneficial but not required.


Program Outcomes

  • Gain practical experience in building a RAG-powered Q&A bot using Python, LangChain, and FAISS.

  • Learn how to ingest, process, and embed documents for retrieval-based systems.

  • Understand how to implement vector retrieval functions and integrate them with language models.

  • Gain hands-on experience in deploying a Q&A bot via FastAPI.

  • Acquire skills in debugging, performance optimization, and handling edge cases in real-world applications.

  • Develop the ability to create and deploy intelligent Q&A systems for various use cases.