Free & open source — MIT License

Know the risk before
you hit deploy

AI pre-deployment intelligence that analyzes Terraform, Kubernetes, Ansible, Jenkins & CloudFormation together — with tool-specific expertise no generic LLM has.

$ docker compose up -d

Runs as a single Docker container. See the GitHub repo for setup instructions.

Unified analysis across your IaC toolchain

Terraform · Kubernetes · Ansible · Jenkins · CloudFormation · Docker · Git

Features

Everything you need for safe deployments

Twelve functional requirements shipped complete — not a stripped-down MVP.

<15s
Analysis time
Full narrative + blast radius + rollback plan
5
IaC tools unified
Terraform, K8s, Ansible, Jenkins, CloudFormation
7
AI Skills loaded
Tool-specific expertise per analysis
100%
Local parsing
Raw IaC never sent to external LLMs

AI risk narrative

Plain-English risk story with specific resource names. Cross-tool interaction analysis. GO / CAUTION / NO-GO recommendation.

Risk scoring engine

0-100 weighted score with resource multipliers, environment detection (prod 2x), and action weights (destroy 1.0, modify 0.6).
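
The idea can be sketched in a few lines of Python. The destroy/modify weights and the prod 2x multiplier come from the description above; the resource multipliers, the create weight, and the x10 scale factor are assumptions, not DeployWhisper's exact constants.

```python
# Sketch of the weighted 0-100 risk score. Values marked "assumed"
# are illustrative, not DeployWhisper's actual constants.
ACTION_WEIGHTS = {"destroy": 1.0, "modify": 0.6, "create": 0.3}   # create weight assumed
RESOURCE_MULTIPLIERS = {"aws_db_instance": 1.5, "aws_security_group": 1.3}  # assumed
ENV_MULTIPLIERS = {"prod": 2.0}  # prod counts double

def risk_score(changes, environment):
    """Sum per-change weights, scale by environment, clamp to 0-100."""
    raw = sum(
        ACTION_WEIGHTS.get(c["action"], 0.3)
        * RESOURCE_MULTIPLIERS.get(c["type"], 1.0)
        for c in changes
    )
    return min(100, round(raw * ENV_MULTIPLIERS.get(environment, 1.0) * 10))

changes = [
    {"type": "aws_db_instance", "action": "destroy"},
    {"type": "aws_security_group", "action": "modify"},
]
print(risk_score(changes, "prod"))  # destroying a prod database dominates the score
```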

Blast radius mapping

Dependency graph with BFS traversal showing direct and transitive affected services from each changed resource.
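
The traversal itself is plain BFS over the dependency graph. A stdlib sketch with an illustrative service graph (DeployWhisper builds the real graph with NetworkX; these edges and names are made up):

```python
# Blast-radius mapping: BFS reachability over a dependency graph.
# Edge A -> B means "B depends on A", so a change to A can affect B.
from collections import deque

deps = {
    "rds-primary": ["api-service"],
    "api-service": ["web-frontend", "worker-queue"],
    "web-frontend": [],
    "worker-queue": [],
}

def blast_radius(graph, changed):
    """Direct and transitive services reachable from a changed resource."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(blast_radius(deps, "rds-primary")))
```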

Automated rollback plan

Step-by-step rollback with time estimates and critical path flagging. Complexity score 1-5.

Incident memory

Embedding similarity against past postmortems. A 70%+ match triggers a warning with historical incident context.
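
The matching step can be sketched as cosine similarity over embedding vectors. The vectors below are toy stand-ins for real embedding-model output, and the incident title is invented:

```python
# 70%-match check: cosine similarity between the embedding of the
# current change summary and embeddings of past postmortems.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

current = [0.9, 0.1, 0.4]  # toy embedding of the current change summary
postmortems = {"2023-11 RDS SG outage": [0.85, 0.15, 0.35]}  # toy past-incident vectors

for title, vec in postmortems.items():
    if cosine(current, vec) >= 0.70:  # 70%+ match triggers the warning
        print(f"Similar past incident: {title}")
```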

Multi-tool parsing

Unified change schema across 5 tools. Auto-detects tool type. Extracts resource ID, action, fields, and risk weight.
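
One plausible shape for that unified record, sketched with a stdlib dataclass for brevity (DeployWhisper validates with Pydantic, and these field names are assumptions, not the actual model):

```python
# Hypothetical unified change record shared by all five parsers.
from dataclasses import dataclass

@dataclass
class Change:
    tool: str          # "terraform", "kubernetes", "ansible", "jenkins", "cloudformation"
    resource_id: str   # e.g. "aws_security_group.db"
    action: str        # "create" | "modify" | "destroy"
    fields: list       # attributes that changed
    risk_weight: float

c = Change("terraform", "aws_security_group.db", "modify", ["ingress"], 0.6)
print(c)
```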

Bring your own LLM

Claude, OpenAI, Ollama, Groq, or Azure. Swap providers with a single env var via LiteLLM. Use Ollama for a fully air-gapped setup.
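
A sketch of the swap through LiteLLM's unified completion call. The env var name here is an assumption; check the repo for the real one.

```python
# Provider routing via LiteLLM: the same call works for every backend,
# selected by a model string. Env var name DEPLOYWHISPER_MODEL is assumed.
import os

DEFAULT_MODEL = "ollama/llama3"  # local model -> full air-gap

def pick_model() -> str:
    return os.environ.get("DEPLOYWHISPER_MODEL", DEFAULT_MODEL)

def narrate(prompt: str) -> str:
    """One completion call for any provider, via LiteLLM."""
    import litellm
    resp = litellm.completion(
        model=pick_model(),  # e.g. "gpt-4o", "claude-3-5-sonnet-20241022", "groq/mixtral-8x7b-32768"
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```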

Analysis history

Every report in SQLite. Browse, compare, export JSON. Trend data shows deployment safety over time.

REST API + CLI

FastAPI with OpenAPI schema. CI-friendly JSON output. Headless: python -m deploywhisper analyze plan.json
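
Calling the API from a script might look like this. The /analyze path matches the CI/CD description above, but the host, port, payload shape, and response fields are assumptions:

```python
# Headless call to the REST API; run against a live instance.
import json
import urllib.request

def analyze(files, base_url="http://localhost:8080"):  # port assumed
    req = urllib.request.Request(
        f"{base_url}/analyze",
        data=json.dumps({"files": files}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # hypothetically: {"risk_score": ..., "verdict": ...}

# Against a running instance:
# report = analyze([{"name": "plan.json", "tool": "terraform"}])
```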

How it works

From raw files to deploy briefing in seconds

1

Drop your deployment artifacts

Drag and drop Terraform plans, K8s manifests, Ansible playbooks, Jenkinsfiles, or CloudFormation templates. Auto-detects each tool and loads the matching AI Skill.

Accepted inputs: terraform plan -json output · .yaml manifests · playbook.yml · Jenkinsfile · cfn-template.yaml
Drop files here or click to browse
main.tf: Detected (Terraform)
deployment.yaml: Detected (Kubernetes)
playbook.yml: Detected (Ansible)
2

AI Skills analyze with deep expertise

Unlike generic LLMs, DeployWhisper injects curated domain knowledge for each tool. The Terraform skill knows apply_immediately triggers a reboot. The K8s skill knows maxUnavailable: 100% means full outage.

All parsing happens locally. Only structured metadata is sent to the LLM — never raw file content.

ANALYSIS ENGINE
Parsing 3 files across 3 tools 0.8s
Scoring risk across 12 changes 0.3s
Mapping blast radius for 8 services 0.5s
Loading skills: Terraform, K8s, Git 0.2s
Generating narrative via Claude streaming...
3

Get your deploy briefing

Risk score, narrative, blast radius, and rollback plan — all on one screen. Verdict above the fold: GO, CAUTION, or NO-GO.

Reports auto-persist to SQLite. Share via link, export JSON, compare with past analyses.

Deployment risk score: 78 (HIGH RISK)
Verdict: CAUTION
This deployment modifies the primary RDS security group ingress while updating the ECS task definition image. The SG change opens port 5432 to 0.0.0.0/0...
Top risk: SG sg-0a1b2c exposes port 5432 to 0.0.0.0/0
5-step rollback ready · 89% incident match · 8 services affected

AI Skills engine

Deep tool-specific expertise,
not a generic LLM wrapper

Curated knowledge modules — risk patterns, failure modes, best practices — injected into the LLM context per analysis.

T Terraform
SG 0.0.0.0/0 detection
IAM wildcard policies
RDS without deletion protection
State drift & backend risks
Provider risks (AWS/GCP/Azure)
count/for_each index shifts
K Kubernetes
Missing readiness probes
Privileged containers
RBAC ClusterRole escalation
HPA/VPA conflicts
Rolling update risks
Network policy gaps
A Ansible
Non-idempotent shell modules
Production targeting risks
Missing changed_when guards
Privilege escalation patterns
Variable precedence conflicts
Handler ordering pitfalls
J Jenkins
Removed approval gates
Credential exposure in env vars
Running on controller node
Missing rollback hooks
Shared library version risks
@NonCPS sandbox bypass
C CloudFormation
Replacement-required updates
Missing DeletionPolicy
Cross-stack Fn::ImportValue risks
G Git
Sensitive file auto-blocking
Force-push detection
Commit message risk signals
D Docker
Running as root detection
Unpinned base image tags
docker.sock mount exposure

Skills are markdown files — add your own team-specific knowledge without writing Python.
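
A team-specific skill might be as simple as this; the file name, headings, and contents below are entirely hypothetical:

```markdown
<!-- skills/acme_payments.md — hypothetical team skill -->
# ACME payments stack

## Risk patterns
- Any change to `payments-gateway` ingress requires a change ticket.
- The `billing` namespace has no HPA; scaling changes cause downtime.

## Rollback notes
- Database migrations in `payments-db` are not reversible after 24 hours.
```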

Use cases

Built for every role on the platform team

P
Platform engineer

Pre-deploy review

Upload your Terraform plan and K8s manifests. Get a risk narrative, blast radius, and rollback plan in 15 seconds. Fix flagged issues, re-analyze, watch the score drop.

S
SRE lead

Go/no-go decision

Review shared report links from teammates. Check blast radius graph, incident match history. Make the call with full context, not gut feel.

J
Junior engineer

Learning from the report

Submit infrastructure PRs with confidence. DeployWhisper explains in plain English what your changes do, so you absorb months of tribal knowledge instantly.

C
CI/CD pipeline

Automated advisory

A GitHub Action POSTs changed files to /analyze, gets a JSON risk report, and adds a PR comment. Advisory only — never blocks, humans decide.

Security-first architecture

Your infrastructure code stays under your control

Five non-negotiable hard lines baked into the architecture.

Raw IaC stays local

Parsers extract metadata locally. File content never reaches external APIs.

Memory-only API keys

Credentials in env vars or session memory. Never on disk or in logs.

Sensitive file blocking

.env, keys, kubeconfig auto-detected and excluded from LLM payload.

Full air-gap mode

Ollama local deployment. Zero network egress. Works fully offline.

Advisory, never blocking

Intelligence, not authorization. No mode can prevent deployment.

Self-hosted. Single-team. Your data stays on your infrastructure.

Pure Python. Bring your own LLM.

No JavaScript, no React, no npm. One language, one install, one mental model.

NiceGUI
Dashboard
FastAPI
REST API
LiteLLM
LLM layer
SQLite
Persistence
NetworkX
Blast radius
Plotly
Viz
Pydantic
Validation
Docker
Deploy
Claude (Anthropic) · GPT-4o (OpenAI) · Llama 3 (Ollama) · Mixtral (Groq) · Azure OpenAI

Free & open source forever

Deploy with confidence.
Start in five minutes.

Clone the repo, docker compose up, done. No subscription, no vendor lock-in. MIT License.

Star on GitHub Read the docs
$ docker compose up -d
MIT License · Self-hosted · No telemetry · No sign-up · No credit card