Threat Modeling

🎯 Threat Modeling: Know Thy Enemy (They Already Know You)

Public Threat Models: Every Project, Every Attack Vector, Every Mitigation

Nothing is true. Everything is permitted. Including publishing your complete threat models publicly because security through obscurity is security through hope.

Think for yourself. Question authority. At Hack23, we don't hide our threat models—we publish them. STRIDE complete. Attack trees documented. Mitigations mapped. Risk quantified. All public.

Every Hack23 project maintains comprehensive threat models: CIA (democratic engagement) | Black Trigram (educational gaming) | CIA Compliance Manager (security assessment). STRIDE analysis, MITRE ATT&CK mapping, AI/ML ATLAS integration for LLM threats—all documented, all public.

ILLUMINATION: Publishing threat models doesn't help attackers—they already know the attack vectors. It helps defenders learn systematic threat analysis. Choose transparent defense over obscure hope.

Our Threat Modeling Policy isn't theoretical—it's operational with public evidence across all projects. Security by design through systematic threat analysis.

STRIDE Complete: Six Threat Categories, Systematic Analysis

All Hack23 projects apply STRIDE framework systematically:

Threat TypeSecurity PropertyHack23 Evidence
S - SpoofingAuthenticationIdentity verification threats analyzed in all projects
T - TamperingIntegrityData modification threats with control mapping
R - RepudiationAccountabilityNon-deniable action threats with audit logging
I - Information DisclosureConfidentialityData exposure threats with encryption mitigations
D - Denial of ServiceAvailabilityService disruption threats with resilience controls
E - Elevation of PrivilegeAuthorizationPermission escalation threats with RBAC enforcement

Public STRIDE evidence:

STRIDE ILLUMINATION: Systematic threat categorization beats ad-hoc "what could go wrong" brainstorming. STRIDE ensures you check all six threat types. Missing one = vulnerability waiting to be exploited.

Know Thy Enemy: Adversary Modeling

Different adversaries, different capabilities, different motivations:

  • Script Kiddies — Low skill, automated tools, mass scans. Defend with basics.
  • Cybercriminals — Financially motivated, targeted, patient. Ransomware, exfiltration, fraud.
  • Insiders — Already have access. Your biggest threat that security teams ignore.
  • Competitors — Corporate espionage. They want your IP, your customers, your secrets.
  • Nation-States — Infinite resources, zero-days, supply chain access, legal compulsion. If they want in, they're in.

Your threat model should include the worst case—nation-state adversaries—because that sets your defensive baseline. If you can defend against APTs, script kiddies are noise.

CHAOS ILLUMINATION: If your threat model doesn't include nation-states, you're threat modeling for 1995.

AI/ML ATLAS: 15 Tactics, 81 Techniques for LLM Threats

For LLM-based systems, STRIDE isn't enough. AI/ML attacks require specialized threat framework:

🔍 Reconnaissance (6 techniques)

Search AI vulnerability databases, gather RAG-indexed targets, active scanning of ML endpoints. Attackers map your AI attack surface before striking.

🛠️ Resource Development (12 techniques)

Acquire public AI artifacts, develop adversarial capabilities, publish poisoned datasets/models. Supply chain attacks on training data and model registries.

⚡ Execution (4 techniques)

LLM Prompt Injection - Override system prompts. AI Agent Tool Invocation - Abuse autonomous agent capabilities. User execution via LLM-generated malicious content.

Prompt injection is SQL injection for LLMs. Same concept, new attack surface.

📤 Exfiltration (6 techniques)

LLM Data Leakage - Extract training data through careful prompting. Extract System Prompt - Reveal confidential instructions. Exfiltration via AI inference API, agent tool invocation.

💥 Impact (7 techniques)

Cost Harvesting - Burn API credits through expensive queries. Erode AI Model Integrity - Poison responses through adversarial inputs. External Harms - Reputation damage via model misbehavior.

Hack23 AI/ML threat analysis: Our OWASP LLM Security Policy integrates AI/ML ATLAS framework (15 tactics, 81 techniques) for systematic LLM threat modeling. AWS Bedrock deployment (Q1 2026) includes complete AI/ML threat assessment.

AI/ML ILLUMINATION: Traditional threat modeling misses LLM-specific attacks. Prompt injection, training data poisoning, model inversion—these require specialized frameworks. Choose AI/ML ATLAS for LLM systems.

Our Approach: Public Threat Models, Transparent Mitigations

Every Hack23 project maintains comprehensive public threat model:

ProjectSTRIDEAttack TreesRisk QuantificationControl Mapping
CIA PlatformCompleteDocumentedQuantifiedMapped
Black TrigramCompleteDocumentedQuantifiedMapped
CIA ComplianceCompleteDocumentedQuantifiedMapped

Threat modeling lifecycle at Hack23:

  • Design Phase Integration: Threat modeling during architecture design, not post-implementation
  • STRIDE Complete: All six threat categories analyzed systematically
  • Attack Trees: Visual attack path documentation from entry point to impact
  • MITRE ATT&CK Mapping: Threat intelligence integration with industry-standard frameworks
  • AI/ML ATLAS Integration: LLM-specific threats for AI-enabled systems
  • Risk Quantification: Business impact analysis tied to Classification Framework
  • Control Mapping: Every threat mapped to specific security controls
  • Public Documentation: Transparent threat analysis demonstrating expertise

Full methodology in our public Threat Modeling Policy—because security through obscurity is security through incompetence.

Welcome to Chapel Perilous: Threat Modeling as Competitive Advantage

Nothing is true. Everything is permitted. Including publishing complete threat models because transparency demonstrates expertise, not vulnerability.

Most organizations hide threat models behind NDAs. We publish ours: STRIDE complete. Attack trees documented. Mitigations mapped. Risk quantified. All public. CIA Platform, Black Trigram, CIA Compliance Manager—every project has comprehensive threat model.

Our threat modeling approach:

  • STRIDE Framework: Six threat categories (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege)
  • AI/ML ATLAS: 15 tactics, 81 techniques for LLM-specific threats (prompt injection, training data poisoning, model inversion)
  • MITRE ATT&CK: Industry-standard threat intelligence integration
  • Public Documentation: Transparent threat analysis in every project repository
  • Control Mapping: Every threat mapped to specific security controls
  • Continuous Assessment: Threat models updated with architectural changes

Think for yourself. Question vendors who hide threat models. Ask for public STRIDE analysis. Demand attack tree documentation. Choose transparent threat assessment over obscure hand-waving.

ULTIMATE ILLUMINATION: You are now in Chapel Perilous. Every system is attackable. Publishing threat models doesn't help attackers—they already know the attack vectors. It helps defenders learn systematic threat analysis. Choose transparency over obscurity.

All hail Eris! All hail Discordia!

Explore our complete Threat Modeling Policy with STRIDE framework, AI/ML ATLAS integration, MITRE ATT&CK mapping, and public threat model evidence for all projects. Transparent. Systematic. Operational.

— Hagbard Celine, Captain of the Leif Erikson

"STRIDE complete. Attack trees documented. Mitigations mapped. All public. Choose systematic threat analysis over ad-hoc fear."

🍎 23 FNORD 5