AI as a Weapon in the Cyber World
Build Trustworthy AI. Detect Misuse. Maintain Advantage.
About This Course
A 3-day, defense-oriented program that links strategy (cognitive warfare, tempo, attribution) to hands-on defenses—authenticity verification (C2PA), adversarial/misuse detection, and hardened RAG pipelines—using no-cost, runnable toolchains, dashboards, and policies you can deploy immediately.
Aim
Enable participants to counter AI weaponization by verifying authenticity, detecting adversarial misuse, and hardening data-to-model-to-ops pipelines—using defense-in-depth practices and no-cost, runnable toolchains.
Workshop Objectives
- Strategy: cognitive warfare, tempo, attribution
- Authenticity: C2PA/metadata, deepfake triage
- Threat mapping: poisoning, backdoors, injections, exfiltration
- Pipeline hardening: RAG/agents with intake checks, canaries, policies
- Detection & robustness: misuse detectors, anomaly scoring, ART/TextAttack/Foolbox
- Ops & governance: KPIs, monitoring, incident response, AAR
Workshop Structure
📅 Day 1 – Strategy, OSINT & Authenticity
- Weaponization: capability × intent × doctrine; supply-chain risks
- Authenticity stack: C2PA, watermark limits, provenance graphs
- Hands-on: Provenance verifier (C2PA/EXIF + heuristics; see the sketch after this list), benign OSINT graphing, deepfake triage notebook
- Free tools: SpiderFoot, theHarvester, recon-ng, exiftool, C2PA CLI, InVID-WeVerify, Python/Jupyter/OpenCV/librosa
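To give a flavour of the Day 1 provenance lab, here is a minimal EXIF-triage sketch. It is illustrative only, not the workshop notebook: it assumes Pillow is installed, uses a hypothetical file name, and leaves full C2PA manifest verification to the C2PA CLI covered in class.

```python
# Minimal EXIF-based provenance triage sketch (illustrative only).
# Assumes Pillow is installed; "sample.jpg" is a hypothetical input file.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_heuristics(path: str) -> dict:
    """Return simple provenance signals: present EXIF tags and red flags."""
    img = Image.open(path)
    raw = img.getexif()
    tags = {TAGS.get(k, k): v for k, v in raw.items()}
    flags = []
    if not tags:
        flags.append("no EXIF metadata (common after re-encoding or stripping)")
    if "Software" in tags:
        flags.append(f"edited with: {tags['Software']}")
    if "DateTime" not in tags:
        flags.append("missing capture timestamp")
    return {"path": path, "tags": list(tags), "flags": flags}

if __name__ == "__main__":
    print(exif_heuristics("sample.jpg"))
```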
📅 Day 2 – Adversarial AI & Model Security
- Threat surface: poisoning, backdoors, prompt/indirect injection, exfiltration
- Defense-in-depth; purple-team mapping to detections/controls
- Hands-on: Policy-driven two-pass RAG (local LLM), robustness demo (ART/TextAttack), telemetry anomaly scoring (see the sketch after this list)
- Free tools: IBM ART, TextAttack, Foolbox, scikit-learn, PyTorch, LangChain, LlamaIndex, Ollama/llama.cpp/GPT4All, promptfoo, Guardrails
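As a preview of the telemetry anomaly-scoring exercise, the sketch below fits scikit-learn's IsolationForest to synthetic per-request features. The feature names, baseline distribution, and contamination setting are illustrative assumptions, not the workshop's dataset or thresholds.

```python
# Minimal telemetry anomaly-scoring sketch using scikit-learn's IsolationForest.
# Feature layout and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-request telemetry: [prompt_length, tool_calls, tokens_out]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[200, 1, 300], scale=[50, 0.5, 80], size=(500, 3))
suspect = np.array([[4000, 12, 2500]])  # an unusually long, tool-heavy request

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
score = model.decision_function(suspect)          # lower = more anomalous
flagged = model.predict(suspect)[0] == -1         # -1 means outlier
print("anomaly score:", score[0], "flagged:", flagged)
```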
📅 Day 3 – Detection, Dashboards & Policies
- Hands-on:
  - LLM misuse detector + Streamlit mini-dashboard
  - Provenance verifier (batch CLI) with policy actions
  - RAG intake hardening (hash/MIME checks, denylists, canaries; see the sketch after this list)
  - Optional: autoencoder vs IsolationForest comparison
- Free tools: CICIDS-2017/UNSW-NB15, scikit-learn, PyTorch, Streamlit/Grafana, Zeek (optional)
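The RAG intake-hardening lab can be previewed with a minimal sketch like the one below: hash denylisting, MIME allow-listing, and a scan for planted canary/injection phrases. The example hash, phrase list, and allowed types are placeholder assumptions, not the workshop's policy set.

```python
# Minimal RAG intake-check sketch: hash denylist, MIME allow-list, canary phrases.
# All list contents below are placeholder assumptions for illustration.
import hashlib
import mimetypes

DENYLISTED_SHA256 = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}  # example
CANARY_PHRASES = ["ignore previous instructions", "BEGIN SYSTEM PROMPT"]                   # illustrative
ALLOWED_MIME = {"text/plain", "text/markdown", "application/pdf"}

def intake_check(path: str) -> list[str]:
    """Return a list of policy violations for a candidate RAG document."""
    findings = []
    data = open(path, "rb").read()
    if hashlib.sha256(data).hexdigest() in DENYLISTED_SHA256:
        findings.append("hash is on the denylist")
    mime, _ = mimetypes.guess_type(path)
    if mime not in ALLOWED_MIME:
        findings.append(f"unexpected MIME type: {mime}")
    text = data.decode("utf-8", errors="ignore").lower()
    for phrase in CANARY_PHRASES:
        if phrase.lower() in text:
            findings.append(f"canary/injection phrase found: {phrase!r}")
    return findings

if __name__ == "__main__":
    print(intake_check("upload.md"))  # hypothetical uploaded document
```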
Who Should Enrol?
- PhD scholars, postgraduates, and senior undergraduates in AI/CS/Cybersecurity
- Professors, researchers, and lab leads working on AI security or policy
- Security architects, red/purple-teamers, SOC/DFIR analysts, and threat researchers
- ML/AI engineers, data scientists, and platform/MLOps engineers in safety-critical domains
- Government/defense, CERTs, and critical-infrastructure practitioners
- Product/policy leaders responsible for safe AI deployment and governance
Important Dates
Registration Ends
10/30/2025, 8:00 AM IST
Workshop Dates
10/30/2025 – 11/01/2025, 9:00 AM IST
Workshop Outcomes
- Strategy lens: cognitive warfare, tempo, attribution
- Authenticity & provenance (C2PA), deepfake triage
- Defense-in-depth: data → model → agent → ops
- Hardened RAG/agents with policy controls & canaries
- Adversarial threats handled: poisoning/backdoors/injections/exfil
- Operational detectors & robustness checks (ART/TextAttack/Foolbox)
Meet Your Mentor(s)

Fee Structure
Student
₹1999 | $60
Ph.D. Scholar / Researcher
₹2999 | $70
Academician / Faculty
₹3999 | $80
Industry Professional
₹5999 | $100
What You’ll Gain
- Live & recorded sessions
- e-Certificate upon completion
- Post-workshop query support
- Hands-on learning experience
Join Our Hall of Fame!
Take your research to the next level with NanoSchool.
Publication Opportunity
Get published in a prestigious open-access journal.
Centre of Excellence
Become part of an elite research community.
Networking & Learning
Connect with global researchers and mentors.
Global Recognition
Worth ₹20,000 / $1,000 in academic value.
