Unified analysis across your IaC toolchain
Features
Twelve functional requirements shipped complete — not a stripped-down MVP.
Plain-English risk story with specific resource names. Cross-tool interaction analysis. GO / CAUTION / NO-GO recommendation.
0-100 weighted score with resource multipliers, environment detection (prod 2x), and action weights (destroy 1.0, modify 0.6).
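The weighted score could compose like the minimal sketch below. The action weights (destroy 1.0, modify 0.6) and the prod 2x multiplier come from the description above; the resource multipliers, the "create" weight, and the scaling/clamping are illustrative assumptions, not the shipped formula.

```python
# Illustrative 0-100 risk score: action weight x resource multiplier,
# summed over changes, doubled in prod, scaled and clamped to 100.
ACTION_WEIGHTS = {"destroy": 1.0, "modify": 0.6, "create": 0.3}  # create is assumed
RESOURCE_MULTIPLIERS = {"aws_db_instance": 3.0, "aws_instance": 1.5}  # assumed values

def risk_score(changes, environment="dev"):
    env_mult = 2.0 if environment == "prod" else 1.0  # prod 2x per the text
    raw = sum(
        ACTION_WEIGHTS.get(c["action"], 0.5)
        * RESOURCE_MULTIPLIERS.get(c["type"], 1.0)
        for c in changes
    )
    return min(100, round(raw * env_mult * 10))  # scale + clamp are assumptions
```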
Dependency graph with BFS traversal showing direct and transitive affected services from each changed resource.
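The BFS traversal might look like this sketch, assuming the graph is an adjacency dict mapping each resource to the resources that depend on it (depth 1 = direct, deeper = transitive):

```python
from collections import deque

def blast_radius(graph, changed):
    """BFS from each changed resource; split hits into direct vs transitive."""
    direct, transitive = set(), set()
    for root in changed:
        seen = set()
        queue = deque((n, 1) for n in graph.get(root, []))
        while queue:
            node, depth = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            (direct if depth == 1 else transitive).add(node)
            queue.extend((n, depth + 1) for n in graph.get(node, []))
    return direct, transitive - direct
```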
Step-by-step rollback with time estimates and critical path flagging. Complexity score 1-5.
Embedding similarity against past postmortems. A 70%+ match triggers a warning with historical incident context.
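The matching step reduces to cosine similarity over embedding vectors with a 0.70 threshold; how DeployWhisper produces the embeddings themselves is not shown here, so this sketch takes the vectors as given:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def incident_matches(change_vec, postmortems, threshold=0.70):
    # postmortems: list of (incident_id, embedding) pairs.
    # Anything at or above the threshold comes back as a warning candidate.
    return [(pid, s) for pid, vec in postmortems
            if (s := cosine(change_vec, vec)) >= threshold]
```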
Unified change schema across 5 tools. Auto-detects tool type. Extracts resource ID, action, fields, and risk weight.
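A unified record with those four extracted fields could be shaped like this; the field names mirror the description above, but the exact schema is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    resource_id: str                            # e.g. "aws_db_instance.main"
    action: str                                 # create | modify | destroy
    tool: str                                   # terraform | k8s | ansible | jenkins | cloudformation
    fields: dict = field(default_factory=dict)  # changed attributes only, no raw file content
    risk_weight: float = 0.5                    # per-change weight fed into scoring
```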
Claude, OpenAI, Ollama, Groq, Azure. Swap with one env var via LiteLLM. Ollama for full air-gap.
Every report in SQLite. Browse, compare, export JSON. Trend data shows deployment safety over time.
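A report store with trend data over time could be as small as the sketch below; the table and column names are assumptions, not the shipped schema:

```python
import sqlite3

def init_db(path=":memory:"):
    # One row per analysis; report_json holds the full exportable report.
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS reports (
        id INTEGER PRIMARY KEY,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP,
        tool TEXT, verdict TEXT, risk_score INTEGER, report_json TEXT)""")
    return db

def trend(db):
    # Average risk score per day: the raw material for a safety-over-time view.
    return db.execute("""SELECT date(created_at), avg(risk_score)
                         FROM reports GROUP BY date(created_at)""").fetchall()
```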
FastAPI with OpenAPI schema. CI-friendly JSON output. Headless: python -m deploywhisper analyze plan.json
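A CI step might condense the JSON output into a one-line log message like this; the "verdict" and "risk_score" field names are assumptions about the report format:

```python
import json

def summarize(report_json: str) -> str:
    """One-line CI log summary of a DeployWhisper JSON report
    (field names assumed, not confirmed)."""
    report = json.loads(report_json)
    return f"{report['verdict']} (risk {report['risk_score']}/100)"
```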
How it works
Drag and drop Terraform plans, K8s manifests, Ansible playbooks, Jenkinsfiles, or CloudFormation templates. Auto-detects each tool and loads the matching AI Skill.
Unlike generic LLMs, DeployWhisper injects curated domain knowledge for each tool. The Terraform skill knows apply_immediately triggers a reboot. The K8s skill knows maxUnavailable: 100% means full outage.
All parsing happens locally. Only structured metadata is sent to the LLM — never raw file content.
Risk score, narrative, blast radius, and rollback plan — all on one screen. Verdict above the fold: GO, CAUTION, or NO-GO.
Reports auto-persist to SQLite. Share via link, export JSON, compare with past analyses.
AI Skills engine
Curated knowledge modules — risk patterns, failure modes, best practices — injected into the LLM context per analysis.
Skills are markdown files — add your own team-specific knowledge without writing Python.
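A team skill file might look like the fragment below. The structure is an assumption; the apply_immediately fact is the Terraform example cited above, and the second bullet is a hypothetical team addition:

```markdown
# Skill: terraform (team overrides)

## Risk patterns
- `apply_immediately = true` on RDS triggers an immediate reboot
- Our `aws_security_group` changes require a change-window ticket (team rule)
```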
Use cases
Upload your Terraform plan and K8s manifests. Get a risk narrative, blast radius, and rollback plan in 15 seconds. Fix flagged issues, re-analyze, watch the score drop.
Review shared report links from teammates. Check blast radius graph, incident match history. Make the call with full context, not gut feel.
Submit infrastructure PRs with confidence. DeployWhisper explains in plain English what your changes do, so you absorb months of tribal knowledge instantly.
GitHub Action POSTs changed files to /analyze, gets a JSON risk report, adds a PR comment. Advisory only — never blocks, humans decide.
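The Action's core call could be sketched as below, assuming a self-hosted /analyze endpoint that accepts a JSON body of files and returns a report; the endpoint shape and field names are assumptions:

```python
import json
from urllib import request

def analyze(endpoint, files):
    # POST changed files to the self-hosted /analyze endpoint (payload shape assumed).
    body = json.dumps({"files": files}).encode()
    req = request.Request(endpoint, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

def pr_comment(report):
    # Advisory comment body only; the Action never fails the check.
    return f"**DeployWhisper: {report['verdict']}** (risk {report['risk_score']}/100)"
```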
Security-first architecture
Five non-negotiable hard lines baked into the architecture.
Parsers extract metadata locally. File content never reaches external APIs.
Credentials in env vars or session memory. Never on disk or in logs.
.env, keys, kubeconfig auto-detected and excluded from LLM payload.
Ollama local deployment. Zero network egress. Works fully offline.
Intelligence, not authorization. No mode can prevent deployment.
Self-hosted. Single-team. Your data stays on your infrastructure.
No JavaScript, no React, no npm. One language, one install, one mental model.