Outage Communication Templates - Professional email and status page templates for when your site is down
Emergency Resource Center

System Down?
Copy These Updates.

Don't panic: communication is half the battle. Copy these field-tested incident templates instantly. Used by 500+ SRE teams.

Phase 1: Awareness (0-15 mins)

🚨 Initial Acknowledgment

Send this immediately to acknowledge user reports and stem the influx of support tickets.

status-update.txt
Subject: Investigating availability issues with [Service]

We are currently investigating reports of [Service] being unavailable. 
Our engineering team is actively identifying the root cause.

We apologize for the disruption and will provide an update in 30 minutes.

Current Status: Investigating 🔍
Next Update: [Time]
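
If you post these updates from a script or chatbot, a few lines of Python can fill in the bracketed placeholders. A minimal sketch (the fill_template helper, the service name, and the printed output are illustrative, not part of any real tool):

fill-placeholders.py
from datetime import datetime, timedelta, timezone

TEMPLATE = """Subject: Investigating availability issues with [Service]

We are currently investigating reports of [Service] being unavailable.
Our engineering team is actively identifying the root cause.

We apologize for the disruption and will provide an update in 30 minutes.

Current Status: Investigating
Next Update: [Time]"""

def fill_template(template: str, values: dict) -> str:
    # Replace each bracketed [Placeholder] with its value.
    for key, value in values.items():
        template = template.replace(f"[{key}]", value)
    return template

next_update = datetime.now(timezone.utc) + timedelta(minutes=30)
print(fill_template(TEMPLATE, {
    "Service": "Checkout API",                # hypothetical service name
    "Time": next_update.strftime("%H:%M UTC"),
}))
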
Phase 2: The Wait (Every 30-60 mins)

🛠️ The "Still Working" Update

Even if nothing has changed, you MUST communicate. Silence breeds panic and churn.

investigation-log.txt
Update on [Service] degradation

We have identified the issue affecting [Service]: it is related to [High-Level Cause, e.g., database connectivity].
A fix is currently being implemented and verified by our team.

We expect full restoration within [Time Estimate] (or "We are continuing to work towards a resolution").

Next Update: [Time]
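
The hard part of this phase is remembering to post the update when nothing has changed. A minimal sketch of a cadence reminder, assuming a generic incoming-webhook URL (the URL, the interval, and the is_resolved check are all placeholders):

update-reminder.py
import time
import requests  # third-party: pip install requests

WEBHOOK_URL = "https://hooks.example.com/incident-channel"  # placeholder
UPDATE_INTERVAL = 30 * 60  # seconds; match the cadence you promised publicly

def remind_until_resolved(is_resolved) -> None:
    # Nag the incident channel every interval until is_resolved() is True.
    while not is_resolved():
        requests.post(WEBHOOK_URL, json={
            "text": "Reminder: post the next public status update now."
        })
        time.sleep(UPDATE_INTERVAL)

# Example: wire is_resolved to your incident tracker; a constant False
# here would nag forever.
# remind_until_resolved(lambda: False)
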
Phase 3: Resolution & Trust

✅ Issue Resolved

Full recovery. This is where you promise the postmortem that rebuilds trust.

resolution.txt
Resolved: [Service] is back online

The issue affecting [Service] has been resolved. All systems are operational.

We will shortly publish a full incident postmortem detailing exactly what happened and how we'll prevent it from happening again.

Thank you for your patience.

You promised a Postmortem.
Let AI write it.

Transform your incident chaos into a board-ready report in 2 minutes. No more hours of manual writing.

  • Auto-timeline from Slack logs
  • Root Cause Analysis (5 Whys)
  • Executive Summary for leadership
  • Export to Markdown, PDF, HN post
Generate Board-Ready Report
incident-report.md
## Executive Summary
On Jan 12, API latency spiked...

## Timeline
- 14:32 UTC: Alert triggered

## Root Cause
Database pool exhausted...
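
The auto-timeline step is mostly mechanical: sort chat messages by timestamp and render them as Markdown. A minimal sketch, assuming a standard Slack channel export (a JSON array of message objects with an epoch-string "ts" and a "text" field; the file path is a placeholder):

slack-timeline.py
import json
from datetime import datetime, timezone

def slack_to_timeline(path: str) -> str:
    # Slack channel exports are JSON arrays of message dicts.
    with open(path) as f:
        messages = json.load(f)
    lines = ["## Timeline"]
    for msg in sorted(messages, key=lambda m: float(m["ts"])):
        stamp = datetime.fromtimestamp(float(msg["ts"]), tz=timezone.utc)
        text = msg.get("text", "").strip()
        if text:  # skip empty/system events
            lines.append(f"- {stamp:%H:%M} UTC: {text}")
    return "\n".join(lines)

print(slack_to_timeline("incident-channel/2024-01-12.json"))  # placeholder path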

Communication Best Practices

Communicate Early

It's better to say "We are looking into it" immediately than to stay silent for an hour while you find the fix. Silence destroys trust.

Be Honest, Not Technical

Customers care about impact ("Can I check out?"), not implementation details ("Redis shard 4 is locked"). Save the deep dive for the postmortem.

Follow Up with a Postmortem

After the fire is out, you MUST explain what happened. A professional postmortem rebuilds the trust you lost during the outage.