Webhook Integrations

Webhook integrations allow you to receive alerts from external monitoring tools directly into Notifer. Instead of manually publishing alerts via HTTP, your monitoring stack pushes events to a Notifer webhook URL, and Notifer automatically creates, updates, and resolves alerts based on the incoming payload.

How It Works

The integration flow is straightforward:

Monitoring Tool             Notifer                        Your Team
       |                       |                              |
       |--- HTTP POST -------->|                              |
       |   (webhook payload)   |                              |
       |                       |-- Parse payload              |
       |                       |-- Extract alert_key          |
       |                       |-- Determine status           |
       |                       |-- Create/update alert        |
       |                       |                              |
       |                       |-- Push notification -------->|
       |                       |-- SSE/WebSocket update ----->|
       |                       |-- Dashboard updated -------->|

Each monitoring tool sends webhooks in its own format. Notifer includes built-in parsers for popular tools that automatically extract the alert key, status, severity, labels, and description from the incoming payload. For unsupported tools, the generic webhook integration lets you configure custom field mappings.
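
Conceptually, the {type} segment of the webhook URL selects which parser runs. A minimal sketch of that dispatch step (illustrative Python; the function names, field names, and registry are hypothetical, not Notifer's actual code):

```python
# Hypothetical sketch of webhook dispatch: the {type} URL segment selects
# a parser, and every parser returns the same normalized field names.

def parse_generic(payload):
    # By default the generic integration expects Notifer's own field names.
    return {k: payload.get(k) for k in ("alert_key", "status", "message")}

# Tool-specific parsers (parse_alertmanager, parse_grafana, ...) would be
# registered here alongside the generic one.
PARSERS = {
    "generic": parse_generic,
}

def ingest(integration_type, payload):
    parser = PARSERS.get(integration_type)
    if parser is None:
        raise ValueError(f"unsupported integration type: {integration_type}")
    return parser(payload)
```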

Supported Integrations

Integration  | Payload Format               | Auto-Resolve                   | Severity Mapping
Alertmanager | Prometheus Alertmanager JSON | Yes (via resolved status)      | critical->P1, warning->P2, info->P4
Grafana      | Grafana Alerting JSON        | Yes (via ok/resolved state)    | Mapped from Grafana severity labels
Datadog      | Datadog Webhook JSON         | Yes (via Recovered transition) | critical->P1, error->P2, warning->P3, info->P4
Zabbix       | Zabbix Webhook JSON          | Yes (via RESOLVED status)      | disaster->P1, high->P2, average->P3, warning->P4, info->P5
Uptime Kuma  | Uptime Kuma JSON             | Yes (via up status)            | down->P1, degraded->P2
Dynatrace    | Dynatrace Problem JSON       | Yes (via RESOLVED state)       | CRITICAL->P1, ERROR->P2, WARNING->P3
Generic      | Custom JSON                  | Configurable                   | Configurable field mapping

Setting Up Integrations

Step 1: Enable Alert Mode

First, make sure Alert Mode is enabled on your target topic. See Getting Started with Alerts for instructions.

Step 2: Add an Integration

  1. Log in to app.notifer.io
  2. Navigate to your alert-enabled topic
  3. Click Settings (gear icon)
  4. Scroll to the Alert Mode section
  5. Click Add Integration
  6. Select your monitoring tool from the list
  7. Copy the generated webhook URL

The webhook URL has the following format:

https://app.notifer.io/api/topics/{topic}/webhooks/ingest/{type}/{token}

Where:

  • {topic} -- Your topic name
  • {type} -- Integration type (e.g., alertmanager, grafana, datadog, zabbix, uptime-kuma, dynatrace, generic)
  • {token} -- A unique security token generated for this integration

One URL per integration

Each integration gets its own unique URL with a distinct token. You can add multiple integrations to the same topic (e.g., Alertmanager + Uptime Kuma) and each will have its own URL.

Step 3: Configure Your Monitoring Tool

Follow the tool-specific instructions below to point your monitoring tool at the webhook URL.


Alertmanager

Prometheus Alertmanager is the most common alerting component in the Prometheus ecosystem. Notifer parses Alertmanager webhook payloads natively, mapping alert names and labels to alert keys.

Configuration

Add Notifer as a webhook receiver in your alertmanager.yml:

receivers:
  - name: 'notifer'
    webhook_configs:
      - url: 'https://app.notifer.io/api/topics/{topic}/webhooks/ingest/alertmanager/{token}'
        send_resolved: true

Then route alerts to the Notifer receiver:

route:
  receiver: 'default'
  routes:
    # Send all critical alerts to Notifer
    - match:
        severity: critical
      receiver: 'notifer'
      continue: true

    # Send all warning alerts to Notifer
    - match:
        severity: warning
      receiver: 'notifer'
      continue: true

    # Or send everything to Notifer
    - receiver: 'notifer'
      continue: true

How Notifer Maps Alertmanager Payloads

Alertmanager Field                          | Notifer Field | Example
alerts[].labels.alertname + key labels      | alert_key     | HighCPU-prod-web-01
alerts[].annotations.description or summary | Message body  | "CPU usage above 90%"
alerts[].labels.severity                    | Priority      | critical -> P1
alerts[].status                             | Alert status  | firing -> open, resolved -> resolved
alerts[].labels                             | Tags          | All label key-value pairs
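
Applied to one entry of an Alertmanager webhook payload, this mapping can be sketched as follows (illustrative Python, not Notifer's parser; the severity dictionary mirrors the Supported Integrations table, and the default priority for an unmapped severity is an assumption):

```python
# Sketch of the Alertmanager mapping table above (hypothetical helper).

SEVERITY_TO_PRIORITY = {"critical": 1, "warning": 2, "info": 4}

def map_alertmanager_alert(alert, key_labels=("instance",)):
    labels = alert.get("labels", {})
    annotations = alert.get("annotations", {})
    # alert_key = alertname joined with the configured key labels
    parts = [labels.get("alertname", "unknown")]
    parts += [labels[k] for k in key_labels if k in labels]
    return {
        "alert_key": "-".join(parts),
        "message": annotations.get("description") or annotations.get("summary", ""),
        "priority": SEVERITY_TO_PRIORITY.get(labels.get("severity"), 4),  # default assumed
        "status": "open" if alert.get("status") == "firing" else "resolved",
        "tags": labels,
    }

firing = {
    "status": "firing",
    "labels": {"alertname": "HighCPU", "instance": "prod-web-01",
               "severity": "critical"},
    "annotations": {"description": "CPU usage above 90%"},
}
print(map_alertmanager_alert(firing)["alert_key"])  # HighCPU-prod-web-01
```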

Full Example: Prometheus + Alertmanager + Notifer

Prometheus alerting rule:

# prometheus/alert_rules.yml
groups:
  - name: infrastructure
    rules:
      - alert: HighCPUUsage
        expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 5m
        labels:
          severity: critical
          team: infrastructure
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
          description: "CPU usage is {{ $value | printf \"%.1f\" }}% on {{ $labels.instance }}"

      - alert: DiskSpaceLow
        expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 < 15
        for: 10m
        labels:
          severity: warning
          team: infrastructure
        annotations:
          summary: "Low disk space on {{ $labels.instance }}"
          description: "Disk space is {{ $value | printf \"%.1f\" }}% free on {{ $labels.instance }}:{{ $labels.mountpoint }}"

Alertmanager configuration:

# alertmanager.yml
global:
  resolve_timeout: 5m

route:
  receiver: 'default'
  group_by: ['alertname', 'instance']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  routes:
    - receiver: 'notifer-infra'
      match:
        team: infrastructure

receivers:
  - name: 'default'
    webhook_configs: []

  - name: 'notifer-infra'
    webhook_configs:
      - url: 'https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/alertmanager/wh_abc123def456'
        send_resolved: true

When the HighCPUUsage alert fires, Notifer creates an alert with:

  • Alert key: HighCPUUsage-prod-web-01:9100 (alertname + instance)
  • Message: "CPU usage is 94.2% on prod-web-01:9100"
  • Priority: P1 (mapped from severity: critical)
  • Tags: severity:critical, team:infrastructure, instance:prod-web-01:9100

When Alertmanager sends the resolved notification, the alert is automatically resolved in Notifer.

send_resolved is important

Always set send_resolved: true in your Alertmanager webhook config. This allows Notifer to automatically resolve alerts when the condition clears, keeping your dashboard accurate without manual cleanup.


Grafana

Grafana Alerting can send webhook notifications when alert rules fire. Notifer parses Grafana alert payloads and maps them to alerts.

Configuration

  1. In Grafana, go to Alerting -> Contact points
  2. Click Add contact point
  3. Set the name (e.g., "Notifer")
  4. Choose Webhook as the integration type
  5. Paste your Notifer webhook URL:
    https://app.notifer.io/api/topics/{topic}/webhooks/ingest/grafana/{token}
  6. Set HTTP Method to POST
  7. Leave other settings as defaults
  8. Click Save contact point
  9. Go to Notification policies and add a route that uses your Notifer contact point

How Notifer Maps Grafana Payloads

Grafana Field                             | Notifer Field | Example
alerts[].labels.alertname                 | alert_key     | HighMemoryUsage
alerts[].annotations.description          | Message body  | "Memory at 95%"
alerts[].labels.severity or rule priority | Priority      | Mapped from labels
status                                    | Alert status  | firing -> open, resolved -> resolved
alerts[].labels                           | Tags          | All labels as tags

Grafana Alert Rule Example

When creating alert rules in Grafana, add a severity label to control the priority mapping in Notifer:

Labels:
  severity = critical
  service = api-gateway

Annotations:
  summary = API Gateway error rate above threshold
  description = Error rate is {{ $values.error_rate }}% (threshold: 5%)

Datadog

Datadog supports custom webhook integrations that can forward monitor alerts to external services.

Configuration

  1. In Datadog, go to Integrations -> Integrations tab
  2. Search for Webhooks and click Configure
  3. Click New Webhook
  4. Fill in the details:
    • Name: notifer
    • URL: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/datadog/{token}
    • Payload: Leave as default (Datadog sends its standard JSON payload)
    • Custom Headers: None required
  5. Click Save

Then, add @webhook-notifer to your monitor notification messages to route alerts to Notifer:

{{#is_alert}}
Server {{host.name}} CPU is critically high: {{value}}%
@webhook-notifer
{{/is_alert}}

{{#is_recovery}}
Server {{host.name}} CPU has recovered: {{value}}%
@webhook-notifer
{{/is_recovery}}

How Notifer Maps Datadog Payloads

Datadog Field       | Notifer Field | Example
alert_id + hostname | alert_key     | 12345678-web-01
body                | Message body  | "CPU is critically high: 95%"
alert_transition    | Alert status  | Triggered -> open, Recovered -> resolved
priority            | Priority      | critical -> P1, error -> P2, warning -> P3
tags                | Tags          | Datadog tags mapped directly

Zabbix

Zabbix supports webhook media types for sending notifications to external systems.

Configuration

  1. In Zabbix, go to Administration -> Media types
  2. Click Create media type
  3. Fill in the details:
    • Name: Notifer
    • Type: Webhook
    • Parameters: Add the following:
      • url: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/zabbix/{token}
      • subject: {ALERT.SUBJECT}
      • message: {ALERT.MESSAGE}
      • event_id: {EVENT.ID}
      • host: {HOST.NAME}
      • severity: {EVENT.SEVERITY}
      • status: {EVENT.STATUS}
      • trigger_name: {TRIGGER.NAME}
  4. In the Script field, add the webhook script that sends an HTTP POST to the URL:
var params = JSON.parse(value);
var req = new HttpRequest();
req.addHeader('Content-Type: application/json');

var payload = JSON.stringify({
    event_id: params.event_id,
    host: params.host,
    severity: params.severity,
    status: params.status,
    trigger_name: params.trigger_name,
    subject: params.subject,
    message: params.message
});

var resp = req.post(params.url, payload);
return resp;
  5. Click Add to save
  6. Assign the Notifer media type to users or actions that should forward alerts

How Notifer Maps Zabbix Payloads

Zabbix Field        | Notifer Field | Example
trigger_name + host | alert_key     | High-CPU-load-web-server-01
message             | Message body  | "CPU load is 12.5 on web-server-01"
severity            | Priority      | Disaster -> P1, High -> P2, Average -> P3
status              | Alert status  | PROBLEM -> open, RESOLVED -> resolved

Uptime Kuma

Uptime Kuma is a self-hosted monitoring tool that can send webhook notifications when monitors go up or down.

Configuration

  1. In Uptime Kuma, go to Settings -> Notifications
  2. Click Setup Notification
  3. Select Webhook as the notification type
  4. Fill in the details:
    • Friendly Name: Notifer
    • Post URL: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/uptime-kuma/{token}
    • Request Body: Default (JSON)
  5. Click Save
  6. Apply the notification to your monitors

Alternatively, you can configure per-monitor notifications:

  1. Edit a monitor
  2. Scroll to Notifications
  3. Select the Notifer webhook notification
  4. Save

How Notifer Maps Uptime Kuma Payloads

Uptime Kuma Field | Notifer Field | Example
monitor.name      | alert_key     | api-health-check
msg               | Message body  | "api-health-check is DOWN"
heartbeat.status  | Alert status  | 0 (down) -> open, 1 (up) -> resolved
Down status       | Priority      | Down -> P1, Degraded -> P2
monitor.tags      | Tags          | Monitor tags mapped directly
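
The heartbeat-to-status mapping can be sketched in a few lines (illustrative Python, not Notifer's parser; it assumes the numeric codes from the table above, 1 = up and 0 = down):

```python
# Illustrative mapping of an Uptime Kuma webhook to Notifer fields
# (hypothetical helper; heartbeat status 1 = up, 0 = down, per the table).

def map_uptime_kuma(payload):
    up = payload.get("heartbeat", {}).get("status") == 1
    return {
        "alert_key": payload.get("monitor", {}).get("name", "unknown"),
        "message": payload.get("msg", ""),
        "status": "resolved" if up else "open",
        "priority": 1,  # down -> P1 (a degraded state would map to P2)
    }

down_event = {
    "monitor": {"name": "api-health-check"},
    "heartbeat": {"status": 0},
    "msg": "api-health-check is DOWN",
}
print(map_uptime_kuma(down_event)["status"])  # open
```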

Dynatrace

Dynatrace can send problem notifications to webhook endpoints.

Configuration

  1. In Dynatrace, go to Settings -> Integration -> Problem notifications
  2. Click Set up notifications -> Custom integration
  3. Fill in the details:
    • Display name: Notifer
    • Webhook URL: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/dynatrace/{token}
    • Accept any SSL certificate: Off (recommended)
  4. Customize the payload or leave defaults
  5. Click Save

How Notifer Maps Dynatrace Payloads

Dynatrace Field          | Notifer Field | Example
ProblemID + ProblemTitle | alert_key     | P-12345-CPU-saturation
ProblemDetailsText       | Message body  | "CPU saturation detected on host-01"
State                    | Alert status  | OPEN -> open, RESOLVED -> resolved
ProblemSeverity          | Priority      | CRITICAL -> P1, ERROR -> P2, WARNING -> P3
Tags                     | Tags          | Dynatrace entity tags

Generic Webhook

For monitoring tools not listed above, or for custom integrations, use the generic webhook type. The generic integration accepts any JSON payload and lets you configure how fields are mapped to Notifer alert properties.

Configuration

When adding a generic integration, you can configure field mappings in the topic settings:

  1. Click Add Integration -> Generic
  2. Copy the webhook URL
  3. Click Configure Mappings to set up field extraction

Default Behavior

By default, the generic webhook expects a JSON body with the following fields:

{
  "alert_key": "my-alert-identifier",
  "message": "Description of the alert",
  "status": "open",
  "priority": 2,
  "tags": ["tag1", "tag2"],
  "source": "my-monitoring-tool",
  "title": "Optional alert title"
}

Custom Field Mappings

If your tool sends a different JSON structure, configure custom field paths using dot notation:

Notifer Field | Default Path | Custom Example
Alert Key     | alert_key    | data.check.id
Message       | message      | data.event.description
Status        | status       | data.event.state
Priority      | priority     | data.severity_level
Tags          | tags         | data.labels
Source        | source       | data.monitor_name

For example, if your tool sends:

{
  "data": {
    "check": {
      "id": "http-check-api",
      "name": "API Health Check"
    },
    "event": {
      "state": "critical",
      "description": "API endpoint returned 503"
    },
    "labels": ["api", "production"]
  }
}

You would configure mappings as:

  • Alert Key Path: data.check.id
  • Message Path: data.event.description
  • Status Path: data.event.state
  • Tags Path: data.labels
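
Dot-notation extraction of this kind takes only a few lines to sketch (illustrative Python; Notifer's actual resolver may differ, e.g. in how it treats missing segments or arrays):

```python
# Illustrative dot-path resolver: walks nested dicts one key at a time
# and returns None when any segment is missing.

def resolve_path(payload, path):
    node = payload
    for segment in path.split("."):
        if not isinstance(node, dict) or segment not in node:
            return None
        node = node[segment]
    return node

payload = {
    "data": {
        "check": {"id": "http-check-api"},
        "event": {"state": "critical", "description": "API endpoint returned 503"},
        "labels": ["api", "production"],
    }
}
print(resolve_path(payload, "data.check.id"))  # http-check-api
```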

Status Value Mapping

For the generic integration, configure which values in your payload correspond to open and resolved states:

Status   | Default Values                                             | Custom Example
Open     | open, firing, triggered, problem, critical, error, warning | alert, down, fail
Resolved | resolved, ok, recovery, recovered, up                      | clear, normal, pass
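
Normalizing an incoming status string against these value sets might look like the following sketch (illustrative; the default value lists are taken from the table above, and case-insensitive matching is an assumption):

```python
# Sketch of status normalization using the default value sets above
# (hypothetical helper; matching is assumed case-insensitive).

OPEN_VALUES = {"open", "firing", "triggered", "problem", "critical",
               "error", "warning"}
RESOLVED_VALUES = {"resolved", "ok", "recovery", "recovered", "up"}

def normalize_status(raw, open_values=OPEN_VALUES,
                     resolved_values=RESOLVED_VALUES):
    value = str(raw).strip().lower()
    if value in resolved_values:
        return "resolved"
    if value in open_values:
        return "open"
    return None  # unmapped values could be dropped or logged

print(normalize_status("FIRING"))     # open
print(normalize_status("Recovered"))  # resolved
```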

Sending a Generic Webhook

curl -X POST \
  "https://app.notifer.io/api/topics/my-alerts/webhooks/ingest/generic/wh_abc123" \
  -H "Content-Type: application/json" \
  -d '{
    "alert_key": "custom-check-001",
    "message": "Custom monitoring check failed: connection timeout",
    "status": "open",
    "priority": 2,
    "tags": ["custom", "network"],
    "source": "custom-monitor",
    "title": "Connection Timeout"
  }'

Include X-Alert-Key as a fallback

If your payload does not include an alert_key field (or the configured field path does not resolve), you can supply the alert key in the X-Alert-Key HTTP header instead. When both the header and a body field are present, the header takes precedence.
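
The selection rule can be sketched as (illustrative, not Notifer's code; the sketch lowercases header names because HTTP header names are case-insensitive):

```python
# Sketch: the X-Alert-Key header wins over the body field when both are
# present, and serves as the fallback when the body field is missing.

def select_alert_key(headers, body):
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-alert-key") or body.get("alert_key")

print(select_alert_key({"X-Alert-Key": "from-header"},
                       {"alert_key": "from-body"}))  # from-header
print(select_alert_key({}, {"alert_key": "from-body"}))  # from-body
```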


Webhook URL Format

All webhook URLs follow this pattern:

https://app.notifer.io/api/topics/{topic}/webhooks/ingest/{type}/{token}

Segment | Description                                 | Example
{topic} | The topic name where alerts will be created | infra-alerts
{type}  | The integration type identifier             | alertmanager, grafana, datadog, zabbix, uptime-kuma, dynatrace, generic
{token} | A unique security token (auto-generated)    | wh_a1b2c3d4e5f6g7h8
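
If you generate these URLs in tooling (for example, when provisioning integrations from scripts), a small builder keeps the segments in order (a sketch; the type validation here is an assumption, not an API guarantee):

```python
# Sketch: assemble a webhook URL from its three variable segments.
# The KNOWN_TYPES check is illustrative, not enforced by the URL format.

BASE = "https://app.notifer.io/api/topics"
KNOWN_TYPES = {"alertmanager", "grafana", "datadog", "zabbix",
               "uptime-kuma", "dynatrace", "generic"}

def webhook_url(topic, integration_type, token):
    if integration_type not in KNOWN_TYPES:
        raise ValueError(f"unknown integration type: {integration_type}")
    return f"{BASE}/{topic}/webhooks/ingest/{integration_type}/{token}"

print(webhook_url("infra-alerts", "alertmanager", "wh_a1b2c3d4e5f6g7h8"))
```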

The token authenticates the webhook request. Any POST to this URL with a valid token is accepted without additional authentication. This is standard for webhook integrations where the monitoring tool cannot send custom auth headers.

HTTPS only

Webhook URLs are HTTPS only. HTTP requests are rejected. This ensures the webhook token is never transmitted in plaintext.

Token Rotation

For security, you should periodically rotate your webhook tokens, especially if you suspect a token has been compromised.

Rotating a Token

  1. Go to your topic Settings -> Alert Mode -> Integrations
  2. Find the integration you want to rotate
  3. Click the Rotate Token button (refresh icon)
  4. A new token is generated immediately
  5. Copy the new webhook URL -- the old token stops working immediately
  6. Update the URL in your monitoring tool configuration

Zero-downtime rotation is not supported

When you rotate a token, the old token is invalidated immediately. Plan a brief maintenance window or accept that alerts from the monitoring tool will fail until you update the URL. For most setups, this takes less than a minute.

When to Rotate

  • Regularly: Every 90 days as a best practice
  • After team changes: When someone with access to the token leaves the team
  • After a security incident: If you suspect unauthorized access
  • After accidental exposure: If the URL was committed to a public repository or shared in an insecure channel

Monitoring Webhook Events

Notifer tracks webhook activity for each integration so you can verify that events are being received and processed correctly.

Webhook Statistics

View webhook stats in Settings -> Alert Mode -> Integrations -> click an integration:

Metric         | Description
Total received | Total number of webhook requests received
Accepted       | Requests that were successfully parsed and created/updated alerts
Dropped        | Requests that were rejected (invalid payload, parsing errors)
Last received  | Timestamp of the most recent webhook request
Last error     | Details of the most recent processing error (if any)

Webhook Event Log

Each integration maintains a log of recent webhook events (last 100 events). This is useful for debugging integration issues:

  1. Go to Settings -> Alert Mode -> Integrations
  2. Click the integration name
  3. Scroll to Recent Events
  4. Each entry shows: timestamp, status (accepted/dropped), alert key extracted, and any error details

Troubleshooting

Webhooks Are Not Being Received

Check the webhook URL:

  • Verify the URL is correct -- copy it again from the Notifer dashboard
  • Ensure the topic name in the URL matches your actual topic
  • Confirm the integration type is correct (e.g., alertmanager not grafana)

Check your monitoring tool:

  • Look for error logs in your monitoring tool indicating failed HTTP requests
  • Verify the tool can reach app.notifer.io on port 443 (HTTPS)
  • Check if a firewall or proxy is blocking outbound requests

Test the webhook manually:

# Test with a simple curl (use your actual URL)
curl -v -X POST \
  "https://app.notifer.io/api/topics/my-alerts/webhooks/ingest/generic/wh_your_token" \
  -H "Content-Type: application/json" \
  -d '{"alert_key": "test", "message": "Webhook test", "status": "open"}'

If you get a 200 OK, the webhook URL is working. If you get 401 or 403, the token may be invalid.

Webhooks Received but Alerts Not Created

Check the event log:

  • Go to the integration's event log and look for "dropped" entries
  • Common reasons: invalid JSON, missing required fields, unparseable payload format

Verify Alert Mode is enabled:

  • Webhook integrations require Alert Mode to be active on the topic
  • Go to topic settings and confirm the Alert Mode toggle is on

Check field mappings (generic integration):

  • If using the generic type, verify your field path mappings are correct
  • Use the event log to see what fields Notifer extracted from the payload

Alerts Created but No Notifications

Check deduplication:

  • If the same alert key was already open, new occurrences are deduplicated (no new notification)
  • This is expected behavior -- check the occurrence counter in the dashboard
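
The deduplication behavior described above can be pictured as (an illustrative sketch, not Notifer's implementation; whether a resolution triggers its own notification is an assumption here):

```python
# Sketch of deduplication: a second "open" event for an alert key that is
# already open increments its occurrence counter instead of notifying.

class AlertStore:
    def __init__(self):
        self.open_alerts = {}  # alert_key -> occurrence count

    def ingest(self, alert_key, status):
        """Return True when a notification should be sent."""
        if status == "resolved":
            self.open_alerts.pop(alert_key, None)
            return True  # resolution notice (assumed behavior)
        if alert_key in self.open_alerts:
            self.open_alerts[alert_key] += 1  # deduplicated occurrence
            return False
        self.open_alerts[alert_key] = 1
        return True

store = AlertStore()
print(store.ingest("HighCPU-web-01", "open"))  # True  (new alert)
print(store.ingest("HighCPU-web-01", "open"))  # False (deduplicated)
```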

Check notification settings:

  • Verify your mobile notification settings (priority threshold, tags filter)
  • An alert mapped to P3 will not trigger a push notification if your threshold is set to P2

Check subscription:

  • Ensure you are subscribed to the topic in the web or mobile app

Wrong Priority or Status Mapping

Check the integration type:

  • Make sure you are using the correct integration type in the URL
  • Using generic for an Alertmanager payload (or vice versa) will result in incorrect field mapping

Check severity labels:

  • For Alertmanager and Grafana, the priority mapping depends on the severity label
  • Ensure your alerting rules include a severity label with values like critical, warning, or info

Real-World Example: Multi-Tool Setup

A common production setup uses multiple monitoring tools feeding into a single Notifer topic:

Prometheus/Alertmanager ──┐
Grafana Alerting ─────────┼──> infra-alerts topic ──> Team notifications
Uptime Kuma ──────────────┤
Custom health checks ─────┘

Each tool gets its own integration and webhook URL:

Tool           | Integration Type | Webhook URL
Alertmanager   | alertmanager     | https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/alertmanager/wh_token1
Grafana        | grafana          | https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/grafana/wh_token2
Uptime Kuma    | uptime-kuma      | https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/uptime-kuma/wh_token3
Custom scripts | generic          | https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/generic/wh_token4

All alerts appear in a unified dashboard regardless of source. The X-Alert-Source field (automatically populated from the integration type) lets you filter by origin when needed.

Separate topics for separate teams

If different teams handle different alert types, use separate topics with their own integrations. For example: infra-alerts for the platform team, app-alerts for application developers, and security-alerts for the security team. Each team configures their own notification preferences.

Next Steps