Webhook Integrations
Webhook integrations allow you to receive alerts from external monitoring tools directly into Notifer. Instead of manually publishing alerts via HTTP, your monitoring stack pushes events to a Notifer webhook URL, and Notifer automatically creates, updates, and resolves alerts based on the incoming payload.
How It Works
The integration flow is straightforward:
Monitoring Tool              Notifer                          Your Team
      |                         |                                 |
      |---- HTTP POST --------->|                                 |
      |   (webhook payload)     |                                 |
      |                         |-- Parse payload                 |
      |                         |-- Extract alert_key             |
      |                         |-- Determine status              |
      |                         |-- Create/update alert           |
      |                         |                                 |
      |                         |-- Push notification ----------->|
      |                         |-- SSE/WebSocket update -------->|
      |                         |-- Dashboard updated ----------->|
Each monitoring tool sends webhooks in its own format. Notifer includes built-in parsers for popular tools that automatically extract the alert key, status, severity, labels, and description from the incoming payload. For unsupported tools, the generic webhook integration lets you configure custom field mappings.
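In pseudocode terms, the create/update/resolve step can be sketched like this (a minimal in-memory model for illustration only; the function and field names are hypothetical, not Notifer's actual implementation):

```python
# Illustrative sketch of the ingest flow: parse a payload, extract the alert
# key and status, then create, deduplicate, or resolve an alert. All names
# here are hypothetical -- this is not Notifer's real code.

def ingest(alerts: dict, payload: dict) -> str:
    """Apply one parsed webhook payload to an in-memory alert store."""
    key = payload["alert_key"]
    status = payload["status"]
    if status == "resolved":
        if key in alerts:
            alerts[key]["status"] = "resolved"
            return "resolved"
        return "dropped"  # resolve event for an unknown alert key
    if key in alerts and alerts[key]["status"] == "open":
        alerts[key]["occurrences"] += 1  # deduplicated: same key already open
        return "deduplicated"
    alerts[key] = {"status": "open", "occurrences": 1,
                   "message": payload.get("message", "")}
    return "created"

store = {}
print(ingest(store, {"alert_key": "high-cpu", "status": "open", "message": "CPU 95%"}))  # created
print(ingest(store, {"alert_key": "high-cpu", "status": "open"}))                        # deduplicated
print(ingest(store, {"alert_key": "high-cpu", "status": "resolved"}))                    # resolved
```

The real parsers differ per integration type, but every payload ultimately reduces to this key/status decision.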
Supported Integrations
| Integration | Payload Format | Auto-Resolve | Severity Mapping |
|---|---|---|---|
| Alertmanager | Prometheus Alertmanager JSON | Yes (via resolved status) | critical->P1, warning->P2, info->P4 |
| Grafana | Grafana Alerting JSON | Yes (via ok/resolved state) | Mapped from Grafana severity labels |
| Datadog | Datadog Webhook JSON | Yes (via Recovered transition) | critical->P1, error->P2, warning->P3, info->P4 |
| Zabbix | Zabbix Webhook JSON | Yes (via RESOLVED status) | disaster->P1, high->P2, average->P3, warning->P4, info->P5 |
| Uptime Kuma | Uptime Kuma JSON | Yes (via up status) | down->P1, degraded->P2 |
| Dynatrace | Dynatrace Problem JSON | Yes (via RESOLVED state) | CRITICAL->P1, ERROR->P2, WARNING->P3 |
| Generic | Custom JSON | Configurable | Configurable field mapping |
Setting Up Integrations
Step 1: Enable Alert Mode
First, make sure Alert Mode is enabled on your target topic. See Getting Started with Alerts for instructions.
Step 2: Add an Integration
- Log in to app.notifer.io
- Navigate to your alert-enabled topic
- Click Settings (gear icon)
- Scroll to the Alert Mode section
- Click Add Integration
- Select your monitoring tool from the list
- Copy the generated webhook URL
The webhook URL has the following format:
https://app.notifer.io/api/topics/{topic}/webhooks/ingest/{type}/{token}
Where:
- {topic} -- Your topic name
- {type} -- Integration type (e.g., alertmanager, grafana, datadog, zabbix, uptime-kuma, dynatrace, generic)
- {token} -- A unique security token generated for this integration
Each integration gets its own unique URL with a distinct token. You can add multiple integrations to the same topic (e.g., Alertmanager + Uptime Kuma) and each will have its own URL.
Step 3: Configure Your Monitoring Tool
Follow the tool-specific instructions below to point your monitoring tool at the webhook URL.
Alertmanager
Prometheus Alertmanager is the most common alerting component in the Prometheus ecosystem. Notifer parses Alertmanager webhook payloads natively, mapping alert names and labels to alert keys.
Configuration
Add Notifer as a webhook receiver in your alertmanager.yml:
receivers:
- name: 'notifer'
webhook_configs:
- url: 'https://app.notifer.io/api/topics/{topic}/webhooks/ingest/alertmanager/{token}'
send_resolved: true
Then route alerts to the Notifer receiver:
route:
receiver: 'default'
routes:
# Send all critical alerts to Notifer
- match:
severity: critical
receiver: 'notifer'
continue: true
# Send all warning alerts to Notifer
- match:
severity: warning
receiver: 'notifer'
continue: true
# Or send everything to Notifer
- receiver: 'notifer'
continue: true
How Notifer Maps Alertmanager Payloads
| Alertmanager Field | Notifer Field | Example |
|---|---|---|
alerts[].labels.alertname + key labels | alert_key | HighCPU-prod-web-01 |
alerts[].annotations.description or summary | Message body | "CPU usage above 90%" |
alerts[].labels.severity | Priority | critical -> P1 |
alerts[].status | Alert status | firing -> open, resolved -> resolved |
alerts[].labels | Tags | All label key-value pairs |
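As a rough illustration of the table above, the mapping could be modeled like this (a sketch only; `map_alertmanager_alert` and the exact key-label rules are assumptions, not Notifer's actual code):

```python
# Hypothetical model of the Alertmanager payload mapping described above.
# Priorities follow the documented severity mapping: critical->P1,
# warning->P2, info->P4; the P3 fallback for unknown severities is assumed.

SEVERITY_TO_PRIORITY = {"critical": 1, "warning": 2, "info": 4}

def map_alertmanager_alert(alert: dict) -> dict:
    labels = alert.get("labels", {})
    key_parts = [labels.get("alertname", "unknown")]
    if "instance" in labels:  # key labels such as instance extend the key
        key_parts.append(labels["instance"])
    ann = alert.get("annotations", {})
    return {
        "alert_key": "-".join(key_parts),
        "message": ann.get("description") or ann.get("summary", ""),
        "priority": SEVERITY_TO_PRIORITY.get(labels.get("severity"), 3),
        "status": "resolved" if alert.get("status") == "resolved" else "open",
        "tags": [f"{k}:{v}" for k, v in labels.items()],
    }

mapped = map_alertmanager_alert({
    "status": "firing",
    "labels": {"alertname": "HighCPU", "instance": "prod-web-01", "severity": "critical"},
    "annotations": {"description": "CPU usage above 90%"},
})
print(mapped["alert_key"], mapped["priority"])  # HighCPU-prod-web-01 1
```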
Full Example: Prometheus + Alertmanager + Notifer
Prometheus alerting rule:
# prometheus/alert_rules.yml
groups:
- name: infrastructure
rules:
- alert: HighCPUUsage
expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
for: 5m
labels:
severity: critical
team: infrastructure
annotations:
summary: "High CPU usage on {{ $labels.instance }}"
description: "CPU usage is {{ $value | printf \"%.1f\" }}% on {{ $labels.instance }}"
- alert: DiskSpaceLow
expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 < 15
for: 10m
labels:
severity: warning
team: infrastructure
annotations:
summary: "Low disk space on {{ $labels.instance }}"
description: "Disk space is {{ $value | printf \"%.1f\" }}% free on {{ $labels.instance }}:{{ $labels.mountpoint }}"
Alertmanager configuration:
# alertmanager.yml
global:
resolve_timeout: 5m
route:
receiver: 'default'
group_by: ['alertname', 'instance']
group_wait: 30s
group_interval: 5m
repeat_interval: 4h
routes:
- receiver: 'notifer-infra'
match:
team: infrastructure
receivers:
- name: 'default'
webhook_configs: []
- name: 'notifer-infra'
webhook_configs:
- url: 'https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/alertmanager/wh_abc123def456'
send_resolved: true
When the HighCPUUsage alert fires, Notifer creates an alert with:
- Alert key: HighCPUUsage-prod-web-01:9100 (alertname + instance)
- Message: "CPU usage is 94.2% on prod-web-01:9100"
- Priority: P1 (mapped from severity: critical)
- Tags: severity:critical, team:infrastructure, instance:prod-web-01:9100
When Alertmanager sends the resolved notification, the alert is automatically resolved in Notifer.
Always set send_resolved: true in your Alertmanager webhook config. This allows Notifer to automatically resolve alerts when the condition clears, keeping your dashboard accurate without manual cleanup.
Grafana
Grafana Alerting can send webhook notifications when alert rules fire. Notifer parses Grafana alert payloads and maps them to alerts.
Configuration
- In Grafana, go to Alerting -> Contact points
- Click Add contact point
- Set the name (e.g., "Notifer")
- Choose Webhook as the integration type
- Paste your Notifer webhook URL: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/grafana/{token}
- Set HTTP Method to POST
- Leave other settings as defaults
- Click Save contact point
- Go to Notification policies and add a route that uses your Notifer contact point
How Notifer Maps Grafana Payloads
| Grafana Field | Notifer Field | Example |
|---|---|---|
alerts[].labels.alertname | alert_key | HighMemoryUsage |
alerts[].annotations.description | Message body | "Memory at 95%" |
alerts[].labels.severity or rule priority | Priority | Mapped from labels |
status | Alert status | firing -> open, resolved -> resolved |
alerts[].labels | Tags | All labels as tags |
Grafana Alert Rule Example
When creating alert rules in Grafana, add a severity label to control the priority mapping in Notifer:
Labels:
severity = critical
service = api-gateway
Annotations:
summary = API Gateway error rate above threshold
description = Error rate is {{ $values.error_rate }}% (threshold: 5%)
Datadog
Datadog supports custom webhook integrations that can forward monitor alerts to external services.
Configuration
- In Datadog, go to Integrations -> Integrations tab
- Search for Webhooks and click Configure
- Click New Webhook
- Fill in the details:
  - Name: notifer
  - URL: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/datadog/{token}
  - Payload: Leave as default (Datadog sends its standard JSON payload)
  - Custom Headers: None required
- Click Save
Then, add @webhook-notifer to your monitor notification messages to route alerts to Notifer:
{{#is_alert}}
Server {{host.name}} CPU is critically high: {{value}}%
@webhook-notifer
{{/is_alert}}
{{#is_recovery}}
Server {{host.name}} CPU has recovered: {{value}}%
@webhook-notifer
{{/is_recovery}}
How Notifer Maps Datadog Payloads
| Datadog Field | Notifer Field | Example |
|---|---|---|
alert_id + hostname | alert_key | 12345678-web-01 |
body | Message body | "CPU is critically high: 95%" |
alert_transition | Alert status | Triggered -> open, Recovered -> resolved |
priority | Priority | critical -> P1, error -> P2, warning -> P3, info -> P4 |
tags | Tags | Datadog tags mapped directly |
Zabbix
Zabbix supports webhook media types for sending notifications to external systems.
Configuration
- In Zabbix, go to Administration -> Media types
- Click Create media type
- Fill in the details:
  - Name: Notifer
  - Type: Webhook
  - Parameters: Add the following:
    - url: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/zabbix/{token}
    - subject: {ALERT.SUBJECT}
    - message: {ALERT.MESSAGE}
    - event_id: {EVENT.ID}
    - host: {HOST.NAME}
    - severity: {EVENT.SEVERITY}
    - status: {EVENT.STATUS}
    - trigger_name: {TRIGGER.NAME}
- In the Script field, add the webhook script that sends an HTTP POST to the URL:
// Parameters passed in from the media type configuration
var params = JSON.parse(value);

var req = new HttpRequest();
req.addHeader('Content-Type: application/json');

// Build the JSON body from the Zabbix macros configured above
var payload = JSON.stringify({
    event_id: params.event_id,
    host: params.host,
    severity: params.severity,
    status: params.status,
    trigger_name: params.trigger_name,
    subject: params.subject,
    message: params.message
});

// POST to the Notifer webhook URL and return the response to Zabbix
var resp = req.post(params.url, payload);
return resp;
- Click Add to save
- Assign the Notifer media type to users or actions that should forward alerts
How Notifer Maps Zabbix Payloads
| Zabbix Field | Notifer Field | Example |
|---|---|---|
trigger_name + host | alert_key | High-CPU-load-web-server-01 |
message | Message body | "CPU load is 12.5 on web-server-01" |
severity | Priority | Disaster -> P1, High -> P2, Average -> P3, Warning -> P4, Info -> P5 |
status | Alert status | PROBLEM -> open, RESOLVED -> resolved |
Uptime Kuma
Uptime Kuma is a self-hosted monitoring tool that can send webhook notifications when monitors go up or down.
Configuration
- In Uptime Kuma, go to Settings -> Notifications
- Click Setup Notification
- Select Webhook as the notification type
- Fill in the details:
  - Friendly Name: Notifer
  - Post URL: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/uptime-kuma/{token}
  - Request Body: Default (JSON)
- Click Save
- Apply the notification to your monitors
Alternatively, you can configure per-monitor notifications:
- Edit a monitor
- Scroll to Notifications
- Select the Notifer webhook notification
- Save
How Notifer Maps Uptime Kuma Payloads
| Uptime Kuma Field | Notifer Field | Example |
|---|---|---|
monitor.name | alert_key | api-health-check |
msg | Message body | "api-health-check is DOWN" |
heartbeat.status | Alert status | 0 (down) -> open, 1 (up) -> resolved |
| Down status | Priority | Down -> P1, Degraded -> P2 |
monitor.tags | Tags | Monitor tags mapped directly |
Dynatrace
Dynatrace can send problem notifications to webhook endpoints.
Configuration
- In Dynatrace, go to Settings -> Integration -> Problem notifications
- Click Set up notifications -> Custom integration
- Fill in the details:
  - Display name: Notifer
  - Webhook URL: https://app.notifer.io/api/topics/{topic}/webhooks/ingest/dynatrace/{token}
  - Accept any SSL certificate: Off (recommended)
- Customize the payload or leave defaults
- Click Save
How Notifer Maps Dynatrace Payloads
| Dynatrace Field | Notifer Field | Example |
|---|---|---|
ProblemID + ProblemTitle | alert_key | P-12345-CPU-saturation |
ProblemDetailsText | Message body | "CPU saturation detected on host-01" |
State | Alert status | OPEN -> open, RESOLVED -> resolved |
ProblemSeverity | Priority | CRITICAL -> P1, ERROR -> P2, WARNING -> P3 |
Tags | Tags | Dynatrace entity tags |
Generic Webhook
For monitoring tools not listed above, or for custom integrations, use the generic webhook type. The generic integration accepts any JSON payload and lets you configure how fields are mapped to Notifer alert properties.
Configuration
When adding a generic integration, you can configure field mappings in the topic settings:
- Click Add Integration -> Generic
- Copy the webhook URL
- Click Configure Mappings to set up field extraction
Default Behavior
By default, the generic webhook expects a JSON body with the following fields:
{
"alert_key": "my-alert-identifier",
"message": "Description of the alert",
"status": "open",
"priority": 2,
"tags": ["tag1", "tag2"],
"source": "my-monitoring-tool",
"title": "Optional alert title"
}
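If you are scripting against the generic endpoint, a small helper can keep payloads consistent (a sketch; `build_generic_alert` is a hypothetical name, not part of any Notifer SDK, and the defaults are arbitrary):

```python
import json

# Hypothetical helper that produces the default generic payload shown above.
# Optional fields are omitted when not provided; pair the resulting body with
# curl or any HTTP client to POST it to your generic webhook URL.

def build_generic_alert(alert_key, message, status="open", priority=3,
                        tags=None, source=None, title=None):
    payload = {"alert_key": alert_key, "message": message,
               "status": status, "priority": priority}
    if tags:
        payload["tags"] = tags
    if source:
        payload["source"] = source
    if title:
        payload["title"] = title
    return json.dumps(payload)

body = build_generic_alert("disk-full-db01", "Disk usage at 97% on db01",
                           priority=1, tags=["db", "storage"], source="cron-check")
print(body)
```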
Custom Field Mappings
If your tool sends a different JSON structure, configure custom field paths using dot notation:
| Notifer Field | Default Path | Custom Example |
|---|---|---|
| Alert Key | alert_key | data.check.id |
| Message | message | data.event.description |
| Status | status | data.event.state |
| Priority | priority | data.severity_level |
| Tags | tags | data.labels |
| Source | source | data.monitor_name |
For example, if your tool sends:
{
"data": {
"check": {
"id": "http-check-api",
"name": "API Health Check"
},
"event": {
"state": "critical",
"description": "API endpoint returned 503"
},
"labels": ["api", "production"]
}
}
You would configure mappings as:
- Alert Key Path: data.check.id
- Message Path: data.event.description
- Status Path: data.event.state
- Tags Path: data.labels
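Dot-notation resolution along these lines is what such mappings imply (an assumption about the mechanics; the real implementation may handle arrays, escaping, or other edge cases differently):

```python
# Sketch of dot-notation field extraction for paths like "data.check.id".
# This is an assumed model of the behavior, not Notifer's actual parser.

def extract_path(payload: dict, path: str, default=None):
    """Walk nested dicts following dot-separated keys."""
    current = payload
    for part in path.split("."):
        if not isinstance(current, dict) or part not in current:
            return default
        current = current[part]
    return current

payload = {
    "data": {
        "check": {"id": "http-check-api", "name": "API Health Check"},
        "event": {"state": "critical", "description": "API endpoint returned 503"},
        "labels": ["api", "production"],
    }
}
print(extract_path(payload, "data.check.id"))        # http-check-api
print(extract_path(payload, "data.event.state"))     # critical
print(extract_path(payload, "data.missing", "n/a"))  # n/a
```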
Status Value Mapping
For the generic integration, configure which values in your payload correspond to open and resolved states:
| Status | Default Values | Custom Example |
|---|---|---|
| Open | open, firing, triggered, problem, critical, error, warning | alert, down, fail |
| Resolved | resolved, ok, recovery, recovered, up | clear, normal, pass |
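Conceptually, the status mapping behaves like this sketch (the value sets are copied from the Default Values column above; `normalize_status` is an illustrative name, and the case-insensitive matching and drop-on-unmapped behavior are assumptions):

```python
# Rough model of status-value normalization per the table above.
# Custom deployments would override these sets via the mapping settings.

OPEN_VALUES = {"open", "firing", "triggered", "problem",
               "critical", "error", "warning"}
RESOLVED_VALUES = {"resolved", "ok", "recovery", "recovered", "up"}

def normalize_status(raw, open_values=OPEN_VALUES,
                     resolved_values=RESOLVED_VALUES):
    """Map a raw payload status to "open", "resolved", or None."""
    value = str(raw).strip().lower()
    if value in resolved_values:
        return "resolved"
    if value in open_values:
        return "open"
    return None  # unmapped: the event would likely be dropped

print(normalize_status("FIRING"))     # open
print(normalize_status("Recovered"))  # resolved
print(normalize_status("weird"))      # None
```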
Sending a Generic Webhook
curl -X POST \
"https://app.notifer.io/api/topics/my-alerts/webhooks/ingest/generic/wh_abc123" \
-H "Content-Type: application/json" \
-d '{
"alert_key": "custom-check-001",
"message": "Custom monitoring check failed: connection timeout",
"status": "open",
"priority": 2,
"tags": ["custom", "network"],
"source": "custom-monitor",
"title": "Connection Timeout"
}'
If your payload does not include an alert_key field (or the configured field path does not resolve), you can supply the key via the X-Alert-Key HTTP header instead. When both are present, the header takes precedence over any field in the body.
Webhook URL Format
All webhook URLs follow this pattern:
https://app.notifer.io/api/topics/{topic}/webhooks/ingest/{type}/{token}
| Segment | Description | Example |
|---|---|---|
{topic} | The topic name where alerts will be created | infra-alerts |
{type} | The integration type identifier | alertmanager, grafana, datadog, zabbix, uptime-kuma, dynatrace, generic |
{token} | A unique security token (auto-generated) | wh_a1b2c3d4e5f6g7h8 |
The token authenticates the webhook request. Any POST to this URL with a valid token is accepted without additional authentication. This is standard for webhook integrations where the monitoring tool cannot send custom auth headers.
Webhook URLs are HTTPS only. HTTP requests are rejected. This ensures the webhook token is never transmitted in plaintext.
Token Rotation
For security, you should periodically rotate your webhook tokens, especially if you suspect a token has been compromised.
Rotating a Token
- Go to your topic Settings -> Alert Mode -> Integrations
- Find the integration you want to rotate
- Click the Rotate Token button (refresh icon)
- A new token is generated immediately
- Copy the new webhook URL -- the old token stops working immediately
- Update the URL in your monitoring tool configuration
When you rotate a token, the old token is invalidated immediately. Plan a brief maintenance window or accept that alerts from the monitoring tool will fail until you update the URL. For most setups, this takes less than a minute.
When to Rotate
- Regularly: Every 90 days as a best practice
- After team changes: When someone with access to the token leaves the team
- After a security incident: If you suspect unauthorized access
- After accidental exposure: If the URL was committed to a public repository or shared in an insecure channel
Monitoring Webhook Events
Notifer tracks webhook activity for each integration so you can verify that events are being received and processed correctly.
Webhook Statistics
View webhook stats in Settings -> Alert Mode -> Integrations -> click an integration:
| Metric | Description |
|---|---|
| Total received | Total number of webhook requests received |
| Accepted | Requests that were successfully parsed and created/updated alerts |
| Dropped | Requests that were rejected (invalid payload, parsing errors) |
| Last received | Timestamp of the most recent webhook request |
| Last error | Details of the most recent processing error (if any) |
Webhook Event Log
Each integration maintains a log of recent webhook events (last 100 events). This is useful for debugging integration issues:
- Go to Settings -> Alert Mode -> Integrations
- Click the integration name
- Scroll to Recent Events
- Each entry shows: timestamp, status (accepted/dropped), alert key extracted, and any error details
Troubleshooting
Webhooks Are Not Being Received
Check the webhook URL:
- Verify the URL is correct -- copy it again from the Notifer dashboard
- Ensure the topic name in the URL matches your actual topic
- Confirm the integration type is correct (e.g., alertmanager, not grafana)
Check your monitoring tool:
- Look for error logs in your monitoring tool indicating failed HTTP requests
- Verify the tool can reach app.notifer.io on port 443 (HTTPS)
- Check if a firewall or proxy is blocking outbound requests
Test the webhook manually:
# Test with a simple curl (use your actual URL)
curl -v -X POST \
"https://app.notifer.io/api/topics/my-alerts/webhooks/ingest/generic/wh_your_token" \
-H "Content-Type: application/json" \
-d '{"alert_key": "test", "message": "Webhook test", "status": "open"}'
If you get a 200 OK, the webhook URL is working. If you get 401 or 403, the token may be invalid.
Webhooks Received but Alerts Not Created
Check the event log:
- Go to the integration's event log and look for "dropped" entries
- Common reasons: invalid JSON, missing required fields, unparseable payload format
Verify Alert Mode is enabled:
- Webhook integrations require Alert Mode to be active on the topic
- Go to topic settings and confirm the Alert Mode toggle is on
Check field mappings (generic integration):
- If using the generic type, verify your field path mappings are correct
- Use the event log to see what fields Notifer extracted from the payload
Alerts Created but No Notifications
Check deduplication:
- If the same alert key was already open, new occurrences are deduplicated (no new notification)
- This is expected behavior -- check the occurrence counter in the dashboard
Check notification settings:
- Verify your mobile notification settings (priority threshold, tags filter)
- An alert mapped to P3 will not trigger a push notification if your threshold is set to P2
Check subscription:
- Ensure you are subscribed to the topic in the web or mobile app
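The priority-threshold behavior mentioned above boils down to a simple comparison, where P1 is the most urgent (so a lower number means higher priority); this tiny sketch is illustrative only:

```python
# Hypothetical model of the push-notification gate: an alert's priority
# must meet the configured threshold. P1 is most urgent, so "meets" means
# numerically less than or equal to the threshold.

def should_push(alert_priority: int, threshold: int) -> bool:
    """Return True when an alert at alert_priority passes a Pn threshold."""
    return alert_priority <= threshold

print(should_push(1, 2))  # True: a P1 alert passes a P2 threshold
print(should_push(3, 2))  # False: a P3 alert does not pass a P2 threshold
```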
Wrong Priority or Status Mapping
Check the integration type:
- Make sure you are using the correct integration type in the URL
- Using generic for an Alertmanager payload (or vice versa) will result in incorrect field mapping
Check severity labels:
- For Alertmanager and Grafana, the priority mapping depends on the severity label
- Ensure your alerting rules include a severity label with values like critical, warning, or info
Real-World Example: Multi-Tool Setup
A common production setup uses multiple monitoring tools feeding into a single Notifer topic:
Prometheus/Alertmanager ──┐
│
Grafana Alerting ─────────┼──> infra-alerts topic ──> Team notifications
│
Uptime Kuma ──────────────┤
│
Custom health checks ─────┘
Each tool gets its own integration and webhook URL:
| Tool | Integration Type | Webhook URL |
|---|---|---|
| Alertmanager | alertmanager | https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/alertmanager/wh_token1 |
| Grafana | grafana | https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/grafana/wh_token2 |
| Uptime Kuma | uptime-kuma | https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/uptime-kuma/wh_token3 |
| Custom scripts | generic | https://app.notifer.io/api/topics/infra-alerts/webhooks/ingest/generic/wh_token4 |
All alerts appear in a unified dashboard regardless of source. The X-Alert-Source field (automatically populated from the integration type) lets you filter by origin when needed.
If different teams handle different alert types, use separate topics with their own integrations. For example: infra-alerts for the platform team, app-alerts for application developers, and security-alerts for the security team. Each team configures their own notification preferences.
Next Steps
- Alert Mode Overview -- Understand the full alert lifecycle and key concepts
- Getting Started with Alerts -- Enable Alert Mode and send manual alerts
- API Reference -- Complete alert and webhook API endpoints
- Priority Levels -- Understand how priority mapping works
- Private Topics -- Secure your alert topics with access control