INCIDENT RESOLVED · 47 MIN AGO

Turn incidents
into answers.

Faultline connects to your incident tools and automatically generates a complete postmortem the moment an incident resolves. No more staring at a blank doc under pressure.

Get early access →
See how it works

Built for SRE & DevOps teams · Integrates with PagerDuty, Slack & GitHub

// how it works

Incident closes.
Postmortem appears.

Faultline watches your incident lifecycle and stitches together everything automatically.

🔥
14:03:22 UTC
Incident triggered in PagerDuty
P1 — API latency spike >4s, error rate 18%. On-call engineer paged.
💬
14:05 – 14:47 UTC
Faultline watches your Slack incident channel
Captures the full thread, key decisions, hypotheses, and who said what.
🔀
14:47 UTC
Contributing PR & deploy identified via GitHub
Surfaces the exact commit deployed 8 minutes before the incident began.
✅
14:50 UTC
Incident resolved
PagerDuty webhook fires. Faultline begins generating (sketched below).
📄
14:51 UTC
Complete postmortem drafted & posted to Slack
Full doc in Google Docs or Notion — timeline, root cause, action items. Ready to review, not to write.
postmortem-2024-11-14-api-latency.gdoc
AI GENERATED · NEEDS REVIEW
Postmortem: API Latency Spike — Nov 14, 2024
SEVERITY: P1  ·  DURATION: 44 MIN  ·  GENERATED BY FAULTLINE IN 58s
Summary

A P1 incident caused elevated API latency (>4s p99) and an 18% error rate affecting all authenticated endpoints from 14:03–14:47 UTC. The contributing change was PR #2847 — a connection pool config update deployed at 13:55 UTC that exhausted available DB connections under load.

Root Cause

config/db.py was updated to set max_connections=10 (down from 50) as part of a cost-reduction effort. Under peak traffic, the pool was exhausted within minutes, causing requests to queue and time out.
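
The change, reconstructed as a sketch. Only the setting name and its before/after values come from the incident; everything else about the file is assumed:

# config/db.py (illustrative sketch; only max_connections and its
# before/after values come from the postmortem above)
max_connections = 10   # was 50; cut for cost, exhausted under peak load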

Action Items
  • Add automated alert when connection pool utilization exceeds 70% · @maya
  • Add db config changes to deployment checklist for load review · @platform-team
  • Write runbook for DB connection pool exhaustion · @dev
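
For the curious: step one of that timeline is nothing exotic. Here is a minimal sketch of a resolve-time trigger in Python, assuming Flask and PagerDuty v3 webhooks; the three helpers are hypothetical stand-ins for Faultline's pipeline, not real APIs.

# Sketch of the resolve-time trigger. Flask and the helper names are
# assumptions; only "incident resolved -> start generating" is from the page.
from flask import Flask, request

app = Flask(__name__)

def collect_slack_thread(incident_id: str) -> list[str]:
    """Stand-in: pull the incident channel via the Slack API."""
    return []

def find_contributing_pr(onset: str) -> int | None:
    """Stand-in: walk recent GitHub deploys for one just before onset."""
    return None

def draft_postmortem(incident: dict, thread: list[str], pr: int | None) -> None:
    """Stand-in: draft the doc and post it to Slack, Google Docs, or Notion."""

@app.post("/webhooks/pagerduty")
def on_incident_event():
    event = request.get_json()["event"]      # PagerDuty v3 webhook envelope
    if event["event_type"] != "incident.resolved":
        return "", 204                       # only resolution starts a draft
    incident = event["data"]
    draft_postmortem(
        incident,
        collect_slack_thread(incident["id"]),
        find_contributing_pr(incident["created_at"]),
    )
    return "", 202
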
// features

Everything you need.
Nothing you don't.

Built for teams tired of manual write-ups after they've already survived the incident.

⚡
Auto-triggered on resolve
PagerDuty or OpsGenie webhook fires the moment your incident closes. No manual steps.
🔀
Causal PR detection
Automatically finds the deploy and PR that preceded the incident in your GitHub history (sketched below).
💬
Slack thread synthesis
Pulls and summarizes your incident channel — who said what, key hypotheses, turning points.
📄
Google Docs & Notion output
Finished postmortem lands in the tool your team already uses. Formatted, linked, and ready to review.
📊
Incident pattern detection
"This is your 3rd DB timeout this quarter." Surfaces recurring themes before they become a crisis (also sketched below).
🎯
Action item assignment
Auto-creates follow-up tickets in Linear or Jira with owners inferred from the Slack thread.
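
Causal PR detection is, at its core, a timestamp walk: the latest deploy that landed shortly before the incident began is the prime suspect. A minimal sketch; the deploy record shape and the 30-minute lookback window are assumptions for illustration, and the sample data mirrors the incident above.

from datetime import datetime, timedelta

LOOKBACK = timedelta(minutes=30)  # assumed window, not a Faultline constant

def find_suspect_deploy(deploys: list[dict],
                        incident_start: datetime) -> dict | None:
    """Latest deploy inside the lookback window before onset is the suspect."""
    candidates = [
        d for d in deploys
        if incident_start - LOOKBACK <= d["deployed_at"] < incident_start
    ]
    return max(candidates, key=lambda d: d["deployed_at"], default=None)

# Mirrors the timeline above: deploy at 13:55 UTC, onset at 14:03:22 UTC.
deploys = [
    {"pr": 2847, "deployed_at": datetime(2024, 11, 14, 13, 55)},
    {"pr": 2841, "deployed_at": datetime(2024, 11, 14, 11, 2)},
]
print(find_suspect_deploy(deploys, datetime(2024, 11, 14, 14, 3, 22)))  # PR #2847

And incident pattern detection is, at heart, counting: tag each resolved incident with a category and watch for repeats. Another sketch, with made-up categories:

from collections import Counter

def recurring_patterns(incidents: list[dict], quarter: str) -> Counter:
    """Count incident categories seen in a given quarter."""
    return Counter(i["category"] for i in incidents if i["quarter"] == quarter)

history = [
    {"category": "db-timeout", "quarter": "2024-Q4"},
    {"category": "db-timeout", "quarter": "2024-Q4"},
    {"category": "db-timeout", "quarter": "2024-Q4"},
    {"category": "bad-deploy", "quarter": "2024-Q4"},
]
print(recurring_patterns(history, "2024-Q4")["db-timeout"])
# 3 -> "This is your 3rd DB timeout this quarter."
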
// integrations

Plugs into your existing stack

No new tools for your team to learn. Faultline works quietly in the background.

🔔 PagerDuty
📟 OpsGenie
💬 Slack
🐙 GitHub
📄 Google Docs
📝 Notion
🎯 Linear
📋 Jira
// pricing

Simple team pricing

One price per team. Unlimited postmortems.

Starter
$29
per team / month
  • Up to 5 team members
  • Unlimited postmortems
  • PagerDuty + Slack + GitHub
  • Google Docs or Notion output
Get started
Enterprise
Custom
talk to us
  • Unlimited members
  • SSO & advanced permissions
  • Custom postmortem templates
  • SLA + dedicated support
Contact us