Getting Started with Alerts
This guide walks you through enabling Alert Mode on a topic, sending your first alert, and managing the alert lifecycle from your dashboard.
Prerequisites
Before you begin, make sure you have:
- A Notifer account (sign up at app.notifer.io)
- At least one topic created (see Creating Topics)
- An API key or authentication token for programmatic access (see API Keys)
If you have not used Notifer before, start with the Quickstart guide to create your account and first topic, then come back here to enable alerts.
Step 1: Enable Alert Mode on a Topic
Alert Mode is enabled per-topic. You can enable it on an existing topic or when creating a new one.
Via the Web App (Recommended)
- Log in to app.notifer.io
- Navigate to the topic you want to use for alerts (or create a new one)
- Click the Settings icon (gear) on the topic page
- Find the Alert Mode section
- Toggle Enable Alert Mode to on
- Click Save
Once enabled, the topic will display an alert badge in the dashboard and switch to the alert-oriented view showing active, acknowledged, and resolved alerts.
Via the Mobile App
- Open the Notifer app on your iOS or Android device
- Navigate to the topic
- Tap the Settings (gear icon) in the top-right corner
- Scroll to Alert Mode and toggle it on
- Tap Save
Via the API
curl -X PATCH https://app.notifer.io/api/topics/my-alerts \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"alert_mode": true}'
We recommend creating a dedicated topic for alerts (e.g., infra-alerts, monitoring, prod-incidents) rather than mixing alerts and regular messages on the same topic. This keeps your dashboard organized and makes it easier to configure notification settings.
Step 2: Send Your First Alert
With Alert Mode enabled on your topic, send an alert by including the X-Alert-Key header in your publish request. The alert key is what tells Notifer to treat this message as an alert and enables deduplication.
curl -d "CPU usage above 90% on prod-web-01" \
-H "X-Alert-Key: cpu-high" \
-H "X-Priority: 2" \
-H "X-Tags: server,production" \
https://app.notifer.io/my-alerts
You should receive a response confirming the alert was created:
{
"id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"topic": "my-alerts",
"message": "CPU usage above 90% on prod-web-01",
"priority": 2,
"tags": ["server", "production"],
"alert": {
"alert_key": "cpu-high",
"status": "open",
"occurrences": 1,
"first_seen": "2026-02-14T10:00:00Z",
"last_seen": "2026-02-14T10:00:00Z"
}
}
Now send the same alert key again to see deduplication in action:
curl -d "CPU usage at 95% on prod-web-01 - still elevated" \
-H "X-Alert-Key: cpu-high" \
-H "X-Priority: 2" \
-H "X-Tags: server,production" \
https://app.notifer.io/my-alerts
This time, instead of creating a new alert, the existing one is updated:
{
"id": "f9e8d7c6-b5a4-3210-fedc-ba0987654321",
"topic": "my-alerts",
"message": "CPU usage at 95% on prod-web-01 - still elevated",
"priority": 2,
"tags": ["server", "production"],
"alert": {
"alert_key": "cpu-high",
"status": "open",
"occurrences": 2,
"first_seen": "2026-02-14T10:00:00Z",
"last_seen": "2026-02-14T10:01:00Z"
}
}
Notice the occurrence count increased to 2 and last_seen was updated, but no new push notification was sent. Your team sees one alert, not two.
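The deduplication behavior above can be modeled in a few lines of Python. This is an illustrative sketch of the grouping logic with in-memory storage, not Notifer's actual server implementation:

```python
from datetime import datetime, timezone

# Hypothetical in-memory model of deduplication: messages sharing an
# alert key collapse into one alert whose occurrence count and
# last_seen advance instead of creating a new alert.
alerts: dict[str, dict] = {}

def receive(alert_key: str, message: str) -> dict:
    now = datetime.now(timezone.utc).isoformat()
    alert = alerts.get(alert_key)
    if alert is None or alert["status"] == "resolved":
        # First message (or first after a resolution) opens a fresh alert
        alert = {"alert_key": alert_key, "status": "open",
                 "occurrences": 1, "first_seen": now, "last_seen": now,
                 "message": message}
        alerts[alert_key] = alert
    else:
        # Same key while the alert is open: update in place,
        # no new push notification
        alert["occurrences"] += 1
        alert["last_seen"] = now
        alert["message"] = message
    return dict(alert)  # snapshot, so earlier responses are not mutated

first = receive("cpu-high", "CPU usage above 90% on prod-web-01")
second = receive("cpu-high", "CPU usage at 95% - still elevated")
print(second["occurrences"])  # 2
```

This mirrors what the two curl calls above showed: the second publish with the same key bumps `occurrences` and `last_seen` rather than opening a second alert.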
All publish examples require authentication. Add one of these headers: X-API-Key: noti_your_key (recommended for scripts) or Authorization: Bearer YOUR_TOKEN (for user sessions). See Authentication for details.
Step 3: View Alerts in the Dashboard
Web Dashboard
- Go to app.notifer.io
- Navigate to your alert-enabled topic
- The topic page now shows the Alert View instead of the regular message list
The Alert View displays:
- Active alerts -- open alerts requiring attention (sorted by priority, then by most recent occurrence)
- Acknowledged alerts -- alerts someone is actively working on
- Resolved alerts -- alerts resolved within the last 24 hours
Each alert card shows:
- Alert key and latest message
- Priority level with color indicator
- Occurrence count and timestamps (first seen / last seen)
- Current status badge
- Tags
Mobile App
The mobile app shows the same alert-oriented view when a topic has Alert Mode enabled:
- Open the topic in the Notifer mobile app
- Alerts are grouped by status: Active, Acknowledged, Resolved
- Tap an alert to see its full timeline and occurrence history
- Use swipe gestures for quick actions (acknowledge, resolve)
Step 4: Acknowledge and Resolve Alerts
Via the Dashboard (Web and Mobile)
Acknowledge an alert:
- Find the open alert in the dashboard
- Click the Acknowledge button (checkmark icon)
- The alert moves to the "Acknowledged" section with a yellow indicator
- Your team can see who acknowledged it and when
Resolve an alert:
- Find the alert (open or acknowledged)
- Click the Resolve button (check icon)
- The alert moves to the "Resolved" section with a green indicator
- A resolution notification is sent to subscribers
Via the API
Acknowledge an alert:
curl -X POST https://app.notifer.io/api/topics/my-alerts/alerts/cpu-high/acknowledge \
-H "Authorization: Bearer YOUR_TOKEN"
Resolve an alert:
curl -X POST https://app.notifer.io/api/topics/my-alerts/alerts/cpu-high/resolve \
-H "Authorization: Bearer YOUR_TOKEN"
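Scripts that drive the alert lifecycle can centralize construction of these endpoint URLs. A small helper sketch (a convenience for your own scripts, not part of any official SDK) that builds the acknowledge/resolve URLs shown above:

```python
def alert_action_url(base_url: str, topic: str, alert_key: str,
                     action: str) -> str:
    """Build the acknowledge/resolve endpoint URL for an alert."""
    if action not in ("acknowledge", "resolve"):
        raise ValueError("action must be 'acknowledge' or 'resolve'")
    return f"{base_url}/api/topics/{topic}/alerts/{alert_key}/{action}"

url = alert_action_url("https://app.notifer.io", "my-alerts",
                       "cpu-high", "acknowledge")
print(url)
```

You would then POST to the returned URL with your `Authorization` or `X-API-Key` header, exactly as in the curl examples.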
Auto-resolve by publishing with resolved status:
curl -d "CPU usage back to normal at 35%" \
-H "X-API-Key: noti_your_key_here" \
-H "X-Alert-Key: cpu-high" \
-H "X-Alert-Status: resolved" \
https://app.notifer.io/my-alerts
This method is particularly useful for automated recovery scripts that detect when a condition has returned to normal.
Alert Headers Reference
When publishing messages to an alert-enabled topic, the following HTTP headers control alert behavior:
| Header | Required | Values | Description |
|---|---|---|---|
| X-Alert-Key | Yes | Any string (max 128 chars) | Unique identifier for deduplication. Messages with the same key are grouped into a single alert. |
| X-Alert-Status | No | open, resolved | Set the alert status explicitly. Default is open. Use resolved to auto-resolve an alert. |
| X-Alert-Source | No | Any string (max 64 chars) | Identifies the source of the alert (e.g., prometheus, grafana, custom-script). Displayed in the alert timeline. |
These headers work alongside standard Notifer headers:
| Header | Required | Description |
|---|---|---|
| X-Priority | No | Priority level 1-5 (1=critical, 5=info). Default: 3. |
| X-Title | No | Message title displayed in notifications. |
| X-Tags | No | Comma-separated tags for filtering. |
The X-Alert-Key header is required for alert-enabled topics. If you publish to an alert-enabled topic without an X-Alert-Key, the message will be rejected with a 400 Bad Request error. This ensures all messages on alert topics are properly tracked.
Sending Alerts Programmatically
Python
import requests
NOTIFER_URL = "https://app.notifer.io"
API_KEY = "noti_your_api_key_here"
TOPIC = "infra-alerts"
def send_alert(alert_key: str, message: str, priority: int = 3,
               tags: list[str] | None = None, status: str = "open",
               source: str | None = None):
"""Send an alert to Notifer."""
headers = {
"X-API-Key": API_KEY,
"X-Alert-Key": alert_key,
"X-Alert-Status": status,
"X-Priority": str(priority),
}
if tags:
headers["X-Tags"] = ",".join(tags)
if source:
headers["X-Alert-Source"] = source
response = requests.post(
f"{NOTIFER_URL}/{TOPIC}",
data=message,
headers=headers,
)
response.raise_for_status()
return response.json()
# Send an alert
send_alert(
alert_key="disk-space-low",
message="Disk usage at 92% on /data volume",
priority=2,
tags=["disk", "storage", "production"],
source="disk-monitor",
)
# Resolve an alert when condition clears
send_alert(
alert_key="disk-space-low",
message="Disk usage back to 60% after cleanup",
status="resolved",
source="disk-monitor",
)
JavaScript / Node.js
const NOTIFER_URL = "https://app.notifer.io";
const API_KEY = "noti_your_api_key_here";
const TOPIC = "infra-alerts";
async function sendAlert({
alertKey,
message,
priority = 3,
tags = [],
status = "open",
source = null,
}) {
const headers = {
"X-API-Key": API_KEY,
"X-Alert-Key": alertKey,
"X-Alert-Status": status,
"X-Priority": String(priority),
};
if (tags.length > 0) {
headers["X-Tags"] = tags.join(",");
}
if (source) {
headers["X-Alert-Source"] = source;
}
const response = await fetch(`${NOTIFER_URL}/${TOPIC}`, {
method: "POST",
headers,
body: message,
});
if (!response.ok) {
throw new Error(`Alert failed: ${response.status} ${response.statusText}`);
}
return response.json();
}
// Send an alert
await sendAlert({
alertKey: "api-latency-high",
message: "API p95 latency at 2.3s (threshold: 500ms)",
priority: 2,
tags: ["api", "latency", "performance"],
source: "latency-monitor",
});
// Auto-resolve when latency returns to normal
await sendAlert({
alertKey: "api-latency-high",
message: "API p95 latency back to 120ms",
status: "resolved",
source: "latency-monitor",
});
Auto-Resolve Example
A common pattern is to have your monitoring script send a resolution message when the condition clears. This keeps your alert dashboard clean without manual intervention.
#!/bin/bash
# check_disk.sh - Run via cron every 5 minutes
THRESHOLD=85
USAGE=$(df -h /data | awk 'NR==2 {print $5}' | sed 's/%//')
TOPIC="infra-alerts"
API_KEY="noti_your_api_key_here"
if [ "$USAGE" -gt "$THRESHOLD" ]; then
# Condition is bad -- send or update alert
curl -s -d "Disk usage at ${USAGE}% on /data (threshold: ${THRESHOLD}%)" \
-H "X-API-Key: $API_KEY" \
-H "X-Alert-Key: disk-space-data" \
-H "X-Alert-Status: open" \
-H "X-Alert-Source: check_disk" \
-H "X-Priority: 2" \
-H "X-Tags: disk,storage" \
"https://app.notifer.io/$TOPIC"
else
# Condition is OK -- resolve any existing alert
curl -s -d "Disk usage at ${USAGE}% on /data - back to normal" \
-H "X-API-Key: $API_KEY" \
-H "X-Alert-Key: disk-space-data" \
-H "X-Alert-Status: resolved" \
-H "X-Alert-Source: check_disk" \
-H "X-Priority: 4" \
-H "X-Tags: disk,storage" \
"https://app.notifer.io/$TOPIC"
fi
Sending X-Alert-Status: resolved for an alert that is already resolved (or does not exist) is a no-op. You do not need to track state in your monitoring scripts -- just send the appropriate status on every check.
Best Practices
1. Use Meaningful Alert Keys
Your alert key should identify both what is wrong and where. This allows you to have multiple distinct alerts for the same type of issue on different resources.
Good alert keys:
cpu-high-prod-web-01 # CPU issue on a specific server
disk-space-/var/log # Disk issue on a specific mount point
healthcheck-payment-service # Health check for a specific service
ssl-expiry-api.example.com # SSL cert for a specific domain
Bad alert keys:
alert # Too generic -- everything deduplicates together
error # Not specific enough
1 # Meaningless
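One way to keep keys consistent across scripts is to derive them from the failing metric and the affected resource. A minimal sketch; the normalization rules here are an assumption for illustration, not a Notifer requirement:

```python
import re

def make_alert_key(metric: str, resource: str) -> str:
    """Build a '<what>-<where>' alert key, normalized and length-capped."""
    key = f"{metric}-{resource}".lower()
    # Keep letters, digits, dots, slashes, and hyphens; collapse the rest
    key = re.sub(r"[^a-z0-9./-]+", "-", key).strip("-")
    return key[:128]  # X-Alert-Key allows at most 128 characters

print(make_alert_key("cpu-high", "prod-web-01"))
print(make_alert_key("ssl-expiry", "api.example.com"))
```

Generating keys through one function like this prevents two scripts from accidentally using slightly different keys for the same condition, which would defeat deduplication.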
2. Set Appropriate Priority Levels
Alert priority should reflect the urgency of the response needed:
| Priority | When to Use | Example |
|---|---|---|
| P1 (Critical) | Production down, data loss risk | Database unreachable, payment system failure |
| P2 (High) | Degraded service, needs prompt action | High CPU/memory, failed backups, high error rate |
| P3 (Medium) | Needs attention, not urgent | Certificate expiring in 7 days, disk at 70% |
| P4 (Low) | Informational alert | Scheduled maintenance reminder, minor config drift |
| P5 (Info) | For awareness only | Successful recovery, periodic health report |
3. Design for Deduplication
Think about the lifecycle of your alert conditions:
- Threshold alerts (CPU > 90%): Use a stable key like cpu-high-{server}. Each check that exceeds the threshold adds an occurrence. Send a resolution when the value drops below the threshold.
- Binary alerts (service up/down): Use a key like healthcheck-{service}. Send open on failure, resolved on recovery.
- Event alerts (pipeline failure): Use a key that includes the pipeline identifier, like pipeline-{name}-{branch}. Resolve on a successful run.
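For threshold alerts, the essential point is that every check emits a status: open while the condition holds, resolved once it clears. The script stays stateless and deduplication does the rest. A sketch:

```python
def threshold_status(value: float, threshold: float) -> str:
    """Map a metric reading to the alert status to publish."""
    return "open" if value > threshold else "resolved"

# Every check publishes unconditionally; deduplication (and the
# resolved no-op) keeps the dashboard showing one accurate alert.
for reading in (92.0, 95.5, 60.0):
    status = threshold_status(reading, threshold=85.0)
    print(f"cpu at {reading}% -> X-Alert-Status: {status}")
```

The first two readings open and then update the same alert; the third resolves it.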
4. Always Include a Source
The X-Alert-Source header helps your team trace where an alert came from, especially when multiple monitoring systems feed into the same topic:
-H "X-Alert-Source: prometheus" # From Prometheus/Alertmanager
-H "X-Alert-Source: grafana" # From Grafana alerts
-H "X-Alert-Source: check_disk" # From a custom script
-H "X-Alert-Source: ci-pipeline" # From CI/CD
5. Combine Tags for Filtering
Use consistent tags across your alerts so team members can filter effectively:
# Environment tags
-H "X-Tags: production"
-H "X-Tags: staging"
# Category tags
-H "X-Tags: cpu,server,infrastructure"
-H "X-Tags: api,latency,performance"
-H "X-Tags: database,connection,backend"
Team members can then configure mobile notification filters to only receive push notifications for specific tag combinations (e.g., only production + priority P1-P2).
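A tag-plus-priority filter like that can be evaluated with a simple subset check. This models how such a filter might behave and is not the mobile app's actual code:

```python
def should_notify(alert_tags: list[str], priority: int,
                  required_tags: set[str], max_priority: int) -> bool:
    """True if an alert matches a 'tags AND priority' notification filter.

    Lower priority numbers are more urgent, so 'P1-P2 only' means
    priority <= 2.
    """
    return required_tags.issubset(alert_tags) and priority <= max_priority

# "Only production alerts at P1-P2" from the example above:
print(should_notify(["cpu", "server", "production"], 2, {"production"}, 2))
print(should_notify(["cpu", "server", "staging"], 2, {"production"}, 2))
```

Consistent tag spelling matters here: `production` and `prod` would be treated as different tags by any filter.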
Real-World Example: Complete Monitoring Setup
Here is a complete setup for monitoring a web application with alerts:
#!/bin/bash
# monitor.sh - Comprehensive monitoring script
# Run via cron: */2 * * * * /opt/scripts/monitor.sh
API_KEY="noti_your_api_key_here"
TOPIC="prod-alerts"
BASE_URL="https://app.notifer.io/$TOPIC"
SOURCE="monitor-script"
# Function to send alert
send_alert() {
local key="$1" msg="$2" priority="$3" tags="$4" status="${5:-open}"
curl -s -o /dev/null \
-d "$msg" \
-H "X-API-Key: $API_KEY" \
-H "X-Alert-Key: $key" \
-H "X-Alert-Status: $status" \
-H "X-Alert-Source: $SOURCE" \
-H "X-Priority: $priority" \
-H "X-Tags: $tags" \
"$BASE_URL"
}
# Check 1: HTTP health check
if ! curl -sf https://myapp.com/health > /dev/null 2>&1; then
send_alert "healthcheck-myapp" "Health check failed for myapp.com" "1" "health,critical"
else
send_alert "healthcheck-myapp" "Health check OK for myapp.com" "5" "health" "resolved"
fi
# Check 2: Disk space
DISK_USAGE=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt 85 ]; then
send_alert "disk-root" "Root disk at ${DISK_USAGE}%" "2" "disk,storage"
else
send_alert "disk-root" "Root disk at ${DISK_USAGE}% - OK" "5" "disk,storage" "resolved"
fi
# Check 3: Memory usage
MEM_USAGE=$(free | awk '/^Mem:/ {printf("%.0f", $3/$2 * 100)}')
if [ "$MEM_USAGE" -gt 90 ]; then
send_alert "memory-high" "Memory at ${MEM_USAGE}%" "2" "memory,server"
else
send_alert "memory-high" "Memory at ${MEM_USAGE}% - OK" "5" "memory,server" "resolved"
fi
Next Steps
- Alert Mode Overview -- Understand the full alert lifecycle and concepts
- Webhook Integrations -- Connect Alertmanager, Grafana, Datadog, and more
- API Reference -- Complete alert API endpoints
- Priority Levels -- Configure priority thresholds for alert notifications
- API Keys -- Set up authentication for your monitoring scripts