When Datadog fires an alert into Slack, Devin jumps on it immediately. This template listens for alert messages from the Datadog app, uses the Datadog MCP to pull the underlying metrics, log lines, and distributed traces, then posts a root-cause analysis back in-thread before a human has finished reading the alert.

Use this template

Open Datadog Alert Investigation in Devin and create the automation with the default configuration. You can customize it before saving.
Looking for a hands-on walkthrough? See the step-by-step tutorial for Datadog Alert Investigation.

What this automation does

The automation pattern here is alert-to-investigation in seconds. Rather than paging a human every time a threshold is breached, you let Devin do the first 15 minutes of work — enumerate recent deploys, correlate metrics, pull suspicious log lines — so the human who eventually opens Slack starts at the “what do we do next” stage instead of the “what broke” stage.
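
Concretely, the threaded reply a responder finds might look like the sketch below (illustrative only; the real output depends on your prompt and on what the Datadog MCP returns):

  • Monitor: checkout-api p99 latency breached at 14:32 UTC
  • Likely cause: checkout-service deploy at 14:27 UTC; the latency spike is isolated to endpoints that service owns
  • Evidence: the breaching metric, matching error log lines, and one slow trace, each linked
  • Suggested next step: roll back the 14:27 deploy or page the owning team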

How it works

Trigger: Slack event
  • Event: slack:message
    • Conditions:
      • channel eq #alerts
What Devin does: Starts a session with full event context, executes the prompt below, and (optionally) notifies you on failure.
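
The “full event context” here is the Slack message event itself. Per Slack’s Events API, that payload carries at least the fields below; exactly which of them Devin exposes to the session is an assumption, so treat this as a sketch:

  • channel: ID of the channel the alert landed in (matched against the #alerts condition)
  • user or bot_id: the message author; Datadog alerts arrive under the Datadog app’s bot identity
  • text: the alert body, including the monitor name and links back to Datadog
  • ts: the message timestamp, which also identifies the thread Devin replies into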

Prerequisites

  • Slack integration connected, so Devin can read the alert channel and reply in-thread
  • Datadog MCP server installed, so Devin can pull metrics, logs, and traces

Example prompt

The template ships with a default prompt. You can edit it after clicking Use template, or leave it as-is.
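
The prompt text itself isn’t reproduced on this page. The sketch below is illustrative of the intent only, not the shipped wording; compare it against what the create page pre-fills:

  A Datadog alert was just posted in this channel. Use the Datadog MCP to:
  1. Identify the monitor that fired and the time window of the breach.
  2. Pull the underlying metrics, recent log lines, and related traces.
  3. List deploys or merged PRs from the hour before the alert fired.
  4. Reply in the alert’s thread with a concise root-cause hypothesis and links to your evidence. If no cause is clear, say so and list what you ruled out.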

Setting it up

  1. Open Automations → Templates in Devin.
  2. Click Datadog Alert Investigation. The create page opens with this template pre-filled.
  3. Connect any required integrations and install MCP servers if you haven’t already.
  4. Replace any placeholder values in the trigger conditions (for example, swap #alerts for the channel your Datadog alerts actually post to).
  5. Review the prompt and adjust it for your team’s language, conventions, and guardrails.
  6. Click Create automation.
Most automation templates include suggested ACU and invocation limits to bound cost during early rollout. Keep them as-is until you’re confident in the automation’s behavior, then raise them to fit your workload.

When to use this template

  • Noisy alert channels where most pages turn out to be known flakes
  • Monitor-heavy SRE orgs that can’t afford a human first-responder on every alert
  • Post-deploy regression alerts that correlate to specific PRs
  • Paging-fatigue mitigation for on-call rotations

Customization ideas

  • Filter to specific monitor names, tags, or severity levels (see the condition sketch after this list)
  • Route different monitors to different playbooks
  • Add the Sentry MCP to cross-reference exceptions
  • Escalate to SRE Incident Response for the highest-severity pages
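
For the condition sketch referenced in the first idea: a trigger narrowed to one channel and P1 pages might look like this. The channel name and the contains operator are assumptions; check which operators your workspace actually offers before copying it:

  • Event: slack:message
    • Conditions:
      • channel eq #datadog-prod
      • text contains [P1]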

See also