World of LLMs: From Architecture to Application
International Workshop on Understanding, Building & Applying Large Language Models
Mode: Virtual / Online
Format: Mentor Based
Level: Moderate
Duration: 3 Days
Start Date: 28 April 2025
Time: 5 PM IST
About
World of LLMs is a deep-dive international workshop for learners, developers, and researchers eager to explore the inner workings and applications of large language models (LLMs). The program spans the foundational Transformer architecture and tokenization strategies, fine-tuning and alignment, and practical implementation with tools such as Hugging Face, OpenAI APIs, DeepSeek, and LangChain, bridging theory and industry use in a hands-on format.
Participants will gain clarity on LLM training paradigms, ethical challenges, prompt engineering techniques, and how to integrate LLMs into apps that power chatbots, summarizers, content creators, and intelligent assistants.
Aim
To introduce and train participants on the complete lifecycle of Large Language Models (LLMs)—from their transformer-based architectures to real-world deployment—empowering them to build intelligent AI-driven applications.
Workshop Objectives
- Decode the architecture of modern LLMs
- Empower participants to interact with and fine-tune LLMs
- Guide the development of context-aware and safe AI tools
- Promote responsible, explainable, and impactful use of LLMs
- Enable learners to become creators—not just consumers—of LLM technologies
Workshop Structure
🔹 Day 1: Foundations and Architectures of Large Language Models
Objective:
To establish a strong conceptual understanding of the architecture, functionality, and evolution of large language models (LLMs).
Topics Covered:
- Introduction to Transformer Architecture and Self-Attention
- Comparative Overview: GPT, BERT, T5, LLaMA
- Tokenization, Embeddings, and Positional Encoding
- Scaling Laws and the Role of Pretraining
- Open-Source LLM Ecosystem (Hugging Face, Meta AI, Google AI)
Hands-On Component:
- Loading and running inference on a pre-trained LLM using Hugging Face Transformers (Colab); see the sketch after this list
- Visualizing attention maps and token flows in a transformer model
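A minimal Colab-style sketch of this Day 1 exercise, assuming the Hugging Face transformers library; the model choice (distilgpt2) and the prompt text are illustrative, not prescribed by the workshop.

```python
# Load a small pre-trained causal LM, generate a continuation, and pull out
# attention maps. distilgpt2 and the prompt are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2", output_attentions=True)
model.eval()

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")

# Inference: greedy generation of a short continuation
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))

# Attention maps: one tensor per layer, shaped (batch, heads, seq_len, seq_len);
# these are the values typically plotted as heatmaps to trace token-to-token flow.
with torch.no_grad():
    outputs = model(**inputs)
print(len(outputs.attentions), outputs.attentions[0].shape)
```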
🔹 Day 2: Prompt Engineering and Model Adaptation Techniques
Objective:
To develop practical skills in interacting with LLMs using advanced prompting strategies and to explore lightweight model adaptation approaches.
Topics Covered:
- Prompt Engineering: Zero-Shot, Few-Shot, and Chain-of-Thought (illustrated in the sketch after this list)
- Prompt Templates and Output Formatting Strategies
- Fine-Tuning Approaches: Full, Instruction Tuning, and Parameter-Efficient (LoRA, PEFT)
- Dataset Preparation and Prompt Dataset Curation
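To make the prompting strategies concrete, here is a small illustrative sketch; the sentiment-classification and word-problem examples are hypothetical and only show the shape of zero-shot, few-shot, and chain-of-thought prompts and of a reusable template.

```python
# Illustrative prompt patterns only; no model call is made here.

# Zero-shot: the task is described directly, with the expected output format.
zero_shot = (
    "Classify the sentiment of the review as Positive or Negative.\n"
    "Review: The battery dies within an hour.\n"
    "Sentiment:"
)

# Few-shot: a handful of worked examples establish the input -> output pattern.
few_shot = (
    "Review: Great screen, fast delivery. -> Positive\n"
    "Review: Stopped working after a week. -> Negative\n"
    "Review: The battery dies within an hour. ->"
)

# Chain-of-thought: the exemplar spells out intermediate reasoning steps.
chain_of_thought = (
    "Q: A box holds 3 red and 5 blue balls. What fraction is red?\n"
    "A: There are 3 + 5 = 8 balls and 3 are red, so the answer is 3/8.\n"
    "Q: A class has 12 girls and 18 boys. What fraction are girls?\n"
    "A: Let's think step by step."
)

# A reusable template keeps task wording and output format consistent.
TEMPLATE = "Summarize the following text in one sentence:\n{text}\nSummary:"
prompt = TEMPLATE.format(text="Transformers use self-attention to model long-range context.")
```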
Hands-On Component:
- Designing and testing prompts for classification, summarization, and Q&A tasks
- Performing fine-tuning or LoRA-based tuning on a small-scale LLM (DistilGPT2 or equivalent); a minimal sketch follows this list
- Evaluating task-specific performance changes
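A minimal LoRA sketch, assuming the Hugging Face peft and transformers libraries with DistilGPT2 as the small-scale model; the hyperparameters and the one-sentence toy batch are illustrative assumptions, not workshop-provided values.

```python
# Parameter-efficient (LoRA) tuning sketch with the peft library on DistilGPT2.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Wrap the base model with low-rank adapters; only adapter weights are trainable.
# "c_attn" is the fused attention projection in GPT-2-style models.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=2e-4)

# One illustrative training step (a real run would loop over a curated dataset)
batch = tokenizer(["Summarize: LLMs learn language patterns from large corpora."],
                  return_tensors="pt", padding=True)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```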
🔹 Day 3: Application Development, Evaluation, and Responsible AI
Objective:
To apply LLMs in real-world applications, understand evaluation strategies, and address ethical and safety challenges in model deployment.
Topics Covered:
- Building LLM Applications using LangChain, LlamaIndex, and Gradio
- Evaluation Metrics: BLEU, ROUGE, Perplexity, and Human Evaluation (see the metric sketch after this list)
- Hallucinations, Bias, and Red-Teaming in LLM Outputs
- Principles of Responsible and Explainable AI in LLMs
- Future Trends: Multimodal LLMs, Retrieval-Augmented Generation (RAG), and Autonomous Agents
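As a quick illustration of the automatic metrics listed above, the sketch below uses the Hugging Face evaluate library (with its metric dependencies, such as rouge_score, installed); the prediction and reference strings are invented purely for demonstration.

```python
# Compute ROUGE and BLEU over toy prediction/reference pairs.
import evaluate

predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=references))
```

Automatic scores like these are cheap proxies; the workshop pairs them with human evaluation because n-gram overlap alone misses factuality and fluency problems.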
Hands-On Component:
- Building a document-based Q&A assistant using LangChain and an open-source LLM (sketched below)
- Creating a simple front-end using Gradio for user interaction
- Analyzing and evaluating output quality and response reliability
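A compact sketch of the document Q&A assistant with a Gradio front-end, assuming a LangChain 0.1.x-style install alongside langchain-community, faiss-cpu, sentence-transformers, transformers, and gradio; the file path notes.txt and the model choices are placeholders, and import paths may differ in newer LangChain releases.

```python
# Retrieval-based document Q&A with an open-source LLM and a Gradio UI.
import gradio as gr
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import HuggingFacePipeline
from langchain_community.vectorstores import FAISS

# Load and chunk a local document ("notes.txt" is a placeholder path)
docs = TextLoader("notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Index the chunks with sentence-transformer embeddings in a FAISS store
db = FAISS.from_documents(
    chunks, HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"))

# Open-source LLM served through a Transformers pipeline
llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-base", task="text2text-generation")

qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever())

def answer(question: str) -> str:
    return qa.invoke({"query": question})["result"]

# Simple front-end for user interaction
gr.Interface(fn=answer, inputs="text", outputs="text", title="Document Q&A").launch()
```

FAISS keeps the index in memory, which is enough at workshop scale; swapping in a persistent vector store or a different Hub model only changes the corresponding lines.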
Intended For
- AI/ML professionals and researchers
- Developers building AI-first applications
- Students pursuing computer science, data science, or AI
- Tech educators and startup founders exploring LLM integration
- Anyone with Python/NLP basics seeking LLM fluency
Important Dates
Registration Ends: 28 April 2025, 3:00 PM IST
Workshop Dates: 28-30 April 2025, 5:00 PM IST
Workshop Outcomes
- Deep understanding of transformer-based LLMs
- Build and deploy LLM-powered applications
- Design and optimize prompts for various generative tasks
- Evaluate and compare open-source vs API-based LLMs
- Apply ethical considerations and safeguards in real-world use
- Earn a verified international certification
Key Takeaways
- Access to Live Lectures
- Access to Recorded Sessions
- e-Certificate
- Query Solving Post Workshop

Future Career Prospects
Skillsets gained from this workshop align with emerging roles such as:
- LLM Architect / Engineer
- AI App Developer
- NLP Researcher
- Prompt Engineer
- Applied AI Product Designer
- Conversational AI and Agent Developer
Job Opportunities
- Generative AI companies (OpenAI, Cohere, Mistral, Anthropic, DeepSeek)
- SaaS and enterprise automation platforms
- Healthcare, legal, and education sectors applying LLMs to structured/unstructured data
- AI research labs and NLP startups
- Customer support automation, EdTech, and digital content industries