For a more detailed PagerDuty integration guide, see the dedicated PagerDuty integration documentation.
1. Deploy a webhook bridge

Create a small service that listens for PagerDuty incident.resolved events and starts a Devin session to write the postmortem. Deploy this as a serverless function (AWS Lambda, Cloudflare Worker) or a lightweight container:
const express = require('express');
const crypto = require('crypto');
const app = express();
app.use(express.json());

// Fail fast at startup rather than throwing inside the async route handler,
// where an exception would become an unhandled promise rejection.
if (!process.env.WEBHOOK_SECRET) {
  throw new Error('WEBHOOK_SECRET environment variable is not set');
}

// Constant-time comparison of the shared-secret header against WEBHOOK_SECRET.
function verifySignature(req) {
  const secret = Buffer.from(req.headers['x-webhook-secret'] || '');
  const expected = Buffer.from(process.env.WEBHOOK_SECRET);
  if (secret.length !== expected.length) return false;
  return crypto.timingSafeEqual(secret, expected);
}

app.post('/pagerduty-resolved', async (req, res) => {
  if (!verifySignature(req)) return res.status(401).send('Bad signature');

  const event = req.body?.event;
  if (!event || event.event_type !== 'incident.resolved') {
    return res.sendStatus(200);
  }

  const incident = event.data;
  const title = incident.title || 'Unknown incident';
  const service = incident.service?.summary || 'unknown-service';
  const urgency = incident.urgency || 'high';
  const incidentUrl = incident.html_url || '';
  const createdAt = incident.created_at || '';
  const resolvedAt = incident.resolved_at || new Date().toISOString();

  const orgId = process.env.DEVIN_ORG_ID;
  const response = await fetch(
    `https://api.devin.ai/v3/organizations/${orgId}/sessions`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.DEVIN_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      prompt: [
        `A PagerDuty incident has been resolved. Draft a postmortem.`,
        ``,
        `Incident: "${title}"`,
        `Service: ${service}`,
        `Urgency: ${urgency}`,
        `Created: ${createdAt}`,
        `Resolved: ${resolvedAt}`,
        `Incident URL: ${incidentUrl}`,
        ``,
        `Write a structured postmortem:`,
        `1. Use the Datadog MCP to pull logs and metrics for ${service} during the incident window`,
        `2. Identify the root cause — check for deploys, config changes, or upstream failures`,
        `3. Build a detailed timeline from first alert to resolution`,
        `4. List action items to prevent recurrence`,
        `5. Post the postmortem as a PR to our docs/postmortems/ directory`,
      ].join('\n'),
      tags: ['pagerduty-postmortem', `service:${service}`],
    }),
  });

  if (!response.ok) {
    console.error(`Devin API error ${response.status}: ${await response.text()}`);
    return res.sendStatus(500);
  }

  const { session_id } = await response.json();
  console.log(`Started postmortem session ${session_id} for: ${title}`);
  res.sendStatus(200);
});

app.listen(3000);
Create a service user in Settings > Service Users at app.devin.ai with the ManageOrgSessions permission, then configure these environment variables on your bridge service:

  - DEVIN_API_KEY — the API token shown after the service user is created
  - DEVIN_ORG_ID — your organization ID; retrieve it by calling GET https://api.devin.ai/v3/enterprise/organizations with your token
  - WEBHOOK_SECRET — a shared secret you’ll also configure in PagerDuty
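As a sketch, the environment setup for the bridge might look like the following. All values shown are placeholders, and the org-ID lookup uses the endpoint mentioned above:

```shell
# Placeholder values — substitute your own service-user token and secret.
export DEVIN_API_KEY="your-service-user-token"
export WEBHOOK_SECRET="$(openssl rand -hex 32)"   # same value goes into PagerDuty later

# Look up your organization ID with the service-user token.
curl -s https://api.devin.ai/v3/enterprise/organizations \
  -H "Authorization: Bearer $DEVIN_API_KEY"

# Copy the ID from the response above.
export DEVIN_ORG_ID="org_..."
```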
2. Configure PagerDuty

  1. In PagerDuty, go to Services > [your service] > Integrations
  2. Click Add Integration and select Generic Webhooks (v3)
  3. Set the Webhook URL to your bridge endpoint (e.g., https://your-bridge.example.com/pagerduty-resolved)
  4. Under Custom Headers, add X-Webhook-Secret with the same value you stored as WEBHOOK_SECRET
  5. Under Event Subscription, filter by event type incident.resolved so the postmortem triggers only after the incident is closed
You can also subscribe to incident.acknowledged if you want Devin to start gathering data while the incident is still in progress, then finalize the postmortem on resolution.
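If you subscribe to both event types, the bridge needs to route them to different behavior. The helper below is a hypothetical sketch of that two-phase flow (gather on acknowledge, finalize on resolve) — the mode names and the split itself are assumptions to adapt to your own prompts:

```javascript
// Sketch: map PagerDuty v3 webhook event types to a session "mode".
// The gather/finalize split is an assumed convention, not part of the API.
function promptModeFor(eventType) {
  switch (eventType) {
    case 'incident.acknowledged':
      return 'gather';   // start pulling logs/metrics while the incident is live
    case 'incident.resolved':
      return 'finalize'; // write the full postmortem
    default:
      return null;       // ignore all other event types
  }
}
```

In the `/pagerduty-resolved` handler you would then branch on the returned mode when building the session prompt.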
3. Connect observability MCPs (optional)

Devin writes better postmortems when it can access your telemetry data. Enable one or more MCPs so Devin can pull real data for the incident window:

  - Datadog MCP — Go to Settings > MCP Marketplace, find Datadog, click Enable, and enter your API/Application keys. Devin will query logs, metrics, deploy events, and monitor history.
  - Sentry MCP — Find Sentry in the MCP Marketplace, click Enable, and complete the OAuth flow. Devin will pull error details, stack traces, and release tags.

Once connected, Devin automatically correlates telemetry with the incident timeline to build an evidence-backed postmortem. Learn more about connecting MCP servers.
4. What Devin generates

When a PagerDuty incident resolves, Devin analyzes the incident window and drafts a structured postmortem. Example postmortem Devin produces:
# Postmortem: Database Connection Pool Exhaustion — orders-service
**Date:** 2026-02-10 | **Duration:** 46 minutes | **Severity:** P1

## Summary
orders-service experienced connection pool exhaustion between
14:32 and 15:18 UTC, causing 502 errors for ~12% of order
placement requests.

## Timeline
- 14:15 UTC — Deploy #387 released (commit e4f29a1)
- 14:28 UTC — Connection pool usage climbed from 60% to 92%
- 14:32 UTC — Pool exhausted; PagerDuty incident triggered
- 14:38 UTC — On-call engineer acknowledged
- 14:45 UTC — Identified Deploy #387 added a new inventory
              check that opens a DB connection per line item
              without releasing it in the finally block
- 15:02 UTC — Rollback to Deploy #386 initiated
- 15:18 UTC — Connection pool recovered; incident resolved

## Root Cause
Deploy #387 introduced `checkInventoryAvailability()` in
`src/services/orders.ts:142`. The function opens a new DB
connection for each line item in an order but only releases
it on the success path. When inventory checks fail (timeout
or stock unavailable), connections leak.

Orders with 5+ line items reliably exhausted the pool within
15 minutes of the deploy.

## Action Items
- [ ] Fix connection leak: add `finally` block to release
      connection (PR #388 opened)
- [ ] Add connection pool usage monitor with alert at 80%
- [ ] Add integration test for multi-item orders with
      simulated inventory failures
- [ ] Review other DB access patterns for similar leak risks
5. Customize the postmortem

Tailor the pipeline to match your team’s postmortem process:

  - Use a Playbook to define your postmortem template — sections, severity classification, required fields, and where to store the output. Pass a playbook_id in the API request to standardize every postmortem.
  - Route by severity. Add logic in your bridge to only generate postmortems for P1/P2 incidents. Lower-severity incidents might not need a full writeup.
  - Add Knowledge about your architecture, service ownership, and past incidents so Devin can connect the dots — e.g., “orders-service depends on inventory-service, which is known for timeout issues under load.”
  - Post to your wiki. Instead of committing to a repo, have Devin post the postmortem to Confluence, Notion, or your internal wiki via the session prompt.
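The severity-routing and playbook ideas above can be sketched as two small helpers for the bridge. Note the assumptions: the `priority.summary` shape (e.g. "P1") follows PagerDuty's priority object as commonly seen in webhook payloads, and `POSTMORTEM_PLAYBOOK_ID` is a hypothetical environment variable holding your playbook's ID:

```javascript
// Sketch: only write postmortems for high-severity incidents.
// `incident.priority?.summary` is assumed to carry labels like "P1"/"P2".
function shouldWritePostmortem(incident) {
  const priority = incident.priority?.summary || '';
  return priority === 'P1' || priority === 'P2';
}

// Sketch: build the session request body with a playbook_id so every
// postmortem follows the same template. The playbook ID is a placeholder.
function buildSessionBody(incident, prompt) {
  return {
    prompt,
    playbook_id: process.env.POSTMORTEM_PLAYBOOK_ID,
    tags: [
      'pagerduty-postmortem',
      `service:${incident.service?.summary || 'unknown-service'}`,
    ],
  };
}
```

In the webhook handler, call `shouldWritePostmortem(incident)` before hitting the Devin API, and pass `buildSessionBody(...)` as the JSON request body.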