New Year Offer End Date: 30th April 2024
Program

AI as a Weapon in the Cyber World

Build Trustworthy AI. Detect Misuse. Maintain Advantage.

About Program:

A 3-day, defense-oriented program that links strategy (cognitive warfare, tempo, attribution) to hands-on defenses—authenticity verification (C2PA), adversarial/misuse detection, and hardened RAG pipelines—using no-cost, runnable toolchains, dashboards, and policies you can deploy immediately.

Aim: Enable participants to counter AI weaponization by verifying authenticity, detecting adversarial misuse, and hardening data-to-model-to-ops pipelines—using defense-in-depth practices and no-cost, runnable toolchains.

Program Objectives:

  • Strategy: cognitive warfare, tempo, attribution

  • Authenticity: C2PA/metadata, deepfake triage

  • Threat mapping: poisoning, backdoors, injections, exfiltration

  • Pipeline hardening: RAG/agents with intake checks, canaries, policies

  • Detection & robustness: misuse detectors, anomaly scoring, ART/TextAttack/Foolbox

  • Ops & governance: KPIs, monitoring, incident response, AAR

What you will learn

📅 Day 1 – Strategy, OSINT & Authenticity

  • Weaponization: capability × intent × doctrine; supply-chain risks
  • Authenticity stack: C2PA, watermark limits, provenance graphs
  • Hands-on: Provenance verifier (C2PA/EXIF + heuristics), benign OSINT graphing, deepfake triage notebook
  • Free tools: SpiderFoot, theHarvester, recon-ng, exiftool, C2PA CLI, InVID-WeVerify, Python/Jupyter/OpenCV/librosa
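The Day 1 provenance verifier combines tool output (exiftool, C2PA CLI) with simple heuristics. A minimal sketch of the heuristic layer is shown below; the field names follow exiftool-style JSON tags but are illustrative, and real tags vary by file format:

```python
def triage_metadata(meta: dict) -> list[str]:
    """Return heuristic red flags for one asset's extracted metadata.

    `meta` is assumed to be a dict parsed from exiftool-style JSON output;
    the specific tag names used here are illustrative, not exhaustive.
    """
    flags = []
    if not meta:
        # Stripped EXIF is common after re-encoding or social-media upload.
        flags.append("no-metadata: EXIF may have been stripped")
        return flags
    for field in ("CreateDate", "Software"):
        if field not in meta:
            flags.append(f"missing:{field}")
    # Presence of an editor string is not proof of manipulation,
    # only a cue to escalate to deeper (e.g., C2PA) verification.
    if meta.get("Software", "").lower().startswith(("adobe", "gimp")):
        flags.append("edited: editing software recorded")
    return flags

print(triage_metadata({}))
print(triage_metadata({"Software": "GIMP 2.10"}))
```

Heuristics like these only rank assets for human review; authoritative provenance still requires a verifiable C2PA manifest.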

📅 Day 2 – Adversarial AI & Model Security

  • Threat surface: poisoning, backdoors, prompt/indirect injection, exfiltration
  • Defense-in-depth; purple-team mapping to detections/controls
  • Hands-on: Policy-driven two-pass RAG (local LLM), robustness demo (ART/TextAttack), telemetry anomaly scoring
  • Free tools: IBM ART, TextAttack, Foolbox, scikit-learn, PyTorch, LangChain, LlamaIndex, Ollama/llama.cpp/GPT4All, promptfoo, Guardrails
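The telemetry anomaly-scoring lab uses scikit-learn models; as a dependency-free illustration of the underlying idea, a z-score flagger over a telemetry series (e.g., per-minute request counts) looks like this — the threshold of 3.0 is a conventional starting point, not a tuned value:

```python
from statistics import mean, stdev

def zscores(values: list[float]) -> list[float]:
    """Standardize a telemetry series against its own mean and spread."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

def flag_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices whose absolute z-score exceeds the threshold."""
    return [i for i, z in enumerate(zscores(values)) if abs(z) > threshold]

# A flat baseline with one burst: only the burst should be flagged.
telemetry = [10.0] * 30 + [200.0]
print(flag_anomalies(telemetry))  # → [30]
```

In the session this simple univariate view is replaced by multivariate detectors (IsolationForest, autoencoders) that handle correlated features and drifting baselines.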

📅 Day 3 – Detection, Dashboards & Policies

  • Hands-on:
    • LLM misuse detector + Streamlit mini-dashboard
    • Provenance verifier (batch CLI) with policy actions
    • RAG intake hardening (hash/MIME checks, denylists, canaries)
    • Optional: autoencoder vs IsolationForest comparison
  • Free tools: CICIDS-2017/UNSW-NB15, scikit-learn, PyTorch, Streamlit/Grafana, Zeek (optional)
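The intake-hardening exercise chains the checks named above: hash denylists, MIME allowlists, and canary tokens. A stdlib-only sketch of such a gate (the canary token and denylist entry are hypothetical placeholders you would replace with your own):

```python
import hashlib
import mimetypes

# Illustrative policy data — populate from real threat intel and your own canaries.
DENYLIST_HASHES = {hashlib.sha256(b"known-bad document").hexdigest()}
ALLOWED_MIME = {"text/plain", "application/pdf"}
CANARY = "CANARY-TOKEN-001"  # planted string whose appearance signals leakage

def intake_check(filename: str, data: bytes) -> list[str]:
    """Return policy violations for a document before it enters the RAG index."""
    reasons = []
    if hashlib.sha256(data).hexdigest() in DENYLIST_HASHES:
        reasons.append("hash-denylisted")
    mime, _ = mimetypes.guess_type(filename)
    if mime not in ALLOWED_MIME:
        reasons.append(f"mime-not-allowed:{mime}")
    if CANARY.encode() in data:
        reasons.append("canary-leak")
    return reasons

print(intake_check("notes.txt", b"hello world"))      # → []
print(intake_check("payload.bin", b"\x00\x01\x02"))   # flags the MIME type
```

A policy action (quarantine, alert, drop) then keys off the returned reasons; an empty list means the document may be ingested.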

Mentor Profile

IT Professional Mentor

Fee Plan

INR 1999/- or USD 50

Get an e-Certificate of Participation!

Intended For :

  • PhD scholars, postgraduates, and senior undergraduates in AI/CS/Cybersecurity

  • Professors, researchers, and lab leads working on AI security or policy

  • Security architects, red/purple-teamers, SOC/DFIR analysts, and threat researchers

  • ML/AI engineers, data scientists, and platform/MLOps engineers in safety-critical domains

  • Government/defense, CERTs, and critical-infrastructure practitioners

  • Product/Policy leaders responsible for safe AI deployment and governance

Career-Supporting Skills

Program Outcomes

  • Strategy lens: cognitive warfare, tempo, attribution

  • Authenticity & provenance (C2PA), deepfake triage

  • Defense-in-depth: data → model → agent → ops

  • Hardened RAG/agents with policy controls & canaries

  • Adversarial threats handled: poisoning, backdoors, injections, exfiltration

  • Operational detectors & robustness checks (ART/TextAttack/Foolbox)