Alert Mode Overview

Alert Mode transforms Notifer topics from simple notification channels into full incident management endpoints. Instead of treating every message as a standalone notification, Alert Mode groups related events by a unique key, tracks their lifecycle, and reduces noise through intelligent deduplication.

What is Alert Mode?

Regular Notifer messages are fire-and-forget: you publish a message, subscribers receive it, and that is the end of the story. Alert Mode adds state to your messages. When you send an alert with a specific key, Notifer tracks whether that alert is open, acknowledged, or resolved. Subsequent messages with the same key update the existing alert rather than creating a new notification.

Regular messages are ideal for one-off notifications like deployment completions, user signups, or daily reports. Alerts are designed for ongoing situations that need to be tracked until resolution, such as server outages, high CPU usage, or failed health checks.

When to use Alert Mode

If your team needs to know "is this issue still happening?" or "has someone looked at this?", you want Alert Mode. If the notification is purely informational with no follow-up needed, regular messages are the better fit.

Alert Lifecycle

Every alert follows a clear three-state lifecycle:

       New alert received
                |
                v
          +-----------+
          |           |
          |   OPEN    | <--- Same alert_key received
          |           |      (occurrence count increases)
          +-----+-----+
                |
    Team member acknowledges
                |
                v
        +----------------+
        |                |
        |  ACKNOWLEDGED  |
        |                |
        +-------+--------+
                |
   Issue fixed / auto-resolved
                |
                v
          +-----------+
          |           |
          | RESOLVED  |
          |           |
          +-----+-----+
                |
   Same alert_key fires again
                |
                v
          +-----------+
          |           |
          |   OPEN    |  (auto-reopen)
          |           |
          +-----------+

State Descriptions

State        | Meaning                             | Notifications                                              | Dashboard Badge
Open         | An active issue requiring attention | Full notifications sent (push, SSE, WebSocket)             | Red indicator
Acknowledged | Someone is investigating the issue  | Suppressed for this alert key (until resolved or reopened) | Yellow indicator
Resolved     | The issue has been fixed            | Resolution notification sent once                          | Green indicator

State Transitions

  • Open -> Acknowledged: A team member clicks "Acknowledge" in the dashboard or calls the acknowledge API endpoint.
  • Open -> Resolved: The alert is resolved manually or via an API call with X-Alert-Status: resolved.
  • Acknowledged -> Resolved: The investigating team member marks the issue as fixed.
  • Acknowledged -> Open (reopen): A new occurrence arrives for the same alert key while acknowledged, re-triggering notifications.
  • Resolved -> Open (reopen): The same alert fires again after being resolved, creating a new incident cycle.
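The transitions above amount to a small state machine. As an illustrative sketch (the state and event names come from this page; the function itself is not part of any Notifer API), the rules can be written as a shell function:

```shell
#!/bin/sh
# Illustrative only: the alert lifecycle as a transition function.
# next_state CURRENT EVENT -> prints the resulting state.
next_state() {
  case "$1:$2" in
    open:acknowledge)        echo acknowledged ;;
    open:resolve)            echo resolved ;;
    acknowledged:resolve)    echo resolved ;;
    open:occurrence)         echo open ;;   # stays open, occurrence count increments
    acknowledged:occurrence) echo open ;;   # reopen: notifications re-trigger
    resolved:occurrence)     echo open ;;   # reopen: new incident cycle
    *)                       echo "$1" ;;   # anything else is a no-op
  esac
}

next_state resolved occurrence   # -> open (auto-reopen)
```

Note that every `occurrence` event leads back to Open, which is exactly the auto-reopen guarantee described later on this page.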

Key Concepts

Alert Key (Deduplication)

The alert_key is the core concept behind Alert Mode. It is a string you define that uniquely identifies a particular alert condition. When multiple messages arrive with the same alert_key, Notifer groups them into a single alert instead of creating separate notifications for each one.

# Both of these update the SAME alert:
curl -d "CPU at 92%" -H "X-Alert-Key: cpu-high" https://app.notifer.io/monitoring
curl -d "CPU at 97%" -H "X-Alert-Key: cpu-high" https://app.notifer.io/monitoring

Choose alert keys that are meaningful and specific. Good examples:

Alert Key               | Scenario
cpu-high-prod-web-01    | CPU alert for a specific server
disk-space-/data        | Disk space alert for a specific mount
healthcheck-api-gateway | Health check failure for a service
cert-expiry-example.com | SSL certificate expiring for a domain
backup-failed-postgres  | Database backup failure
Naming convention

Use lowercase with hyphens. Include the check type and the resource identifier so your team can tell at a glance what is affected: {check}-{resource}.
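As a minimal sketch, the convention can be wrapped in a helper so all of your scripts build keys the same way (the `alert_key` function and its normalization are illustrative, not part of Notifer):

```shell
#!/bin/sh
# Illustrative helper: build an alert key as {check}-{resource},
# normalized to lowercase per the naming convention above.
alert_key() {
  printf '%s-%s\n' "$1" "$2" | tr '[:upper:]' '[:lower:]'
}

alert_key cpu-high prod-web-01     # -> cpu-high-prod-web-01
alert_key cert-expiry Example.com  # -> cert-expiry-example.com
```

Centralizing key construction like this keeps mixed-case hostnames or service names from accidentally creating two separate alerts for the same condition.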

Occurrence Counting

Each time a message arrives with the same alert_key while the alert is still open, Notifer increments the occurrence counter instead of sending a new notification. This is what makes Alert Mode powerful for monitoring scenarios where the same condition is detected repeatedly.

For example, a cron job that checks CPU every minute might fire 60 alerts per hour during a spike. Without Alert Mode, your team gets 60 separate notifications. With Alert Mode, they get one alert showing "60 occurrences" alongside the most recent message body.
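A sketch of such a check loop, assuming a small wrapper around the publish call (the `DRY_RUN` guard is my own addition so the sketch runs without a live Notifer topic; with it unset, the wrapper would issue the real curl publish):

```shell
#!/bin/sh
# Illustrative: the same alert key fired on every check cycle.
# With Alert Mode these repeated sends collapse into one alert whose
# occurrence count rises; without it, each send is a separate notification.
send_alert() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    # DRY_RUN (default here): print what would be sent instead of publishing
    echo "send: $1 (key=cpu-high-prod-web-01)"
  else
    curl -d "$1" -H "X-Alert-Key: cpu-high-prod-web-01" https://app.notifer.io/monitoring
  fi
}

for pct in 92 95 98; do   # e.g. three consecutive one-minute checks over threshold
  send_alert "CPU at ${pct}%"
done
```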

The occurrence count is visible in the dashboard and included in API responses:

{
  "alert_key": "cpu-high-prod-web-01",
  "status": "open",
  "occurrences": 42,
  "first_seen": "2026-02-14T08:15:00Z",
  "last_seen": "2026-02-14T08:56:00Z",
  "message": "CPU at 98% for 5 minutes"
}

Auto-Reopen

When an alert has been acknowledged or resolved, a new occurrence of the same alert_key will automatically reopen the alert. This ensures that recurring problems are never silently ignored.

The auto-reopen behavior works as follows:

  • Acknowledged alert receives new occurrence: Alert transitions back to Open. The team is notified again because the issue may have worsened or changed.
  • Resolved alert receives new occurrence: Alert transitions back to Open. This indicates the problem has returned and needs fresh attention.

Auto-reopen is always enabled and cannot be disabled. This is by design -- if a condition fires again, your team should know about it regardless of previous acknowledgments.

Use Cases

Infrastructure Monitoring (Prometheus / Grafana)

Connect your Prometheus Alertmanager or Grafana alerting rules to Notifer via webhooks. Alerts from your monitoring stack are automatically mapped to Notifer alerts with proper deduplication and lifecycle tracking.

Prometheus -> Alertmanager -> Notifer webhook -> Alert created
                                              -> Push notification
                                              -> Dashboard updated

Common scenarios:

  • High CPU / memory / disk usage
  • Service health check failures
  • Pod restarts in Kubernetes
  • Database connection pool exhaustion
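On the Alertmanager side, routing to Notifer is a standard webhook receiver. A minimal sketch, assuming the topic URL doubles as the webhook endpoint (check your Notifer integration settings for the exact URL and any required payload mapping):

```yaml
# alertmanager.yml (fragment) -- illustrative, not a complete configuration
route:
  receiver: notifer
receivers:
  - name: notifer
    webhook_configs:
      - url: https://app.notifer.io/infra-alerts   # assumed Notifer webhook endpoint
```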

DevOps and CI/CD

Track deployment failures, build breaks, and infrastructure provisioning issues as alerts rather than one-off messages. When a pipeline fails and then fails again on retry, you see a single alert with multiple occurrences rather than a flood of separate notifications.

# CI pipeline failure alert
curl -d "Pipeline #1234 failed at stage: deploy" \
  -H "X-Alert-Key: pipeline-main-deploy" \
  -H "X-Priority: 2" \
  -H "X-Tags: ci,pipeline,failure" \
  https://app.notifer.io/ci-alerts

Incident Management

Use Alert Mode as the foundation of your incident response workflow:

  1. Monitoring detects an issue and creates an alert (Open).
  2. On-call engineer receives a push notification and acknowledges it.
  3. Engineer investigates and fixes the issue.
  4. Engineer resolves the alert (or sends X-Alert-Status: resolved from a recovery script).
  5. The full timeline (occurrences, timestamps, messages) is preserved for post-mortems.

IoT and Device Monitoring

Track sensor alerts from IoT devices. Temperature sensors, moisture detectors, or motion sensors can publish to Notifer with alert keys tied to the device identifier:

curl -d "Temperature 42°C exceeds threshold 38°C" \
  -H "X-Alert-Key: temp-sensor-warehouse-b" \
  -H "X-Priority: 2" \
  -H "X-Tags: iot,temperature,warehouse" \
  https://app.notifer.io/iot-alerts

If the sensor continues reporting high temperatures, the occurrence counter increases without flooding your notification channels.

Alert Dashboard

When Alert Mode is enabled on a topic, the dashboard switches from the standard chronological message list to an alert-oriented view. This view is designed for at-a-glance situational awareness.

Dashboard Layout

The alert dashboard organizes alerts into three sections:

Active Alerts (Open)

  • Sorted by priority (P1 first), then by most recent occurrence
  • Each card shows: alert key, latest message, priority badge, occurrence count, first/last seen timestamps, and tags
  • Red status indicator

In Progress (Acknowledged)

  • Shows who acknowledged the alert and when
  • Sorted by acknowledgment time (most recent first)
  • Yellow status indicator

Recently Resolved

  • Alerts resolved in the last 24 hours
  • Shows resolution time and who resolved it
  • Green status indicator
  • Older resolved alerts are accessible via the alert history

Alert Timeline

Click any alert to expand its full timeline. The timeline shows every event in chronological order:

  • Initial alert creation (with the first message body)
  • Each subsequent occurrence (with updated message body)
  • Acknowledgment events (who and when)
  • Resolution events (who, when, and the resolution message)
  • Reopen events (if the alert was reopened after resolution)

This timeline is preserved indefinitely and can be used for post-incident reviews.

The alert dashboard supports filtering by:

  • Status: Show only open, acknowledged, or resolved alerts
  • Priority: Filter by priority range (e.g., P1-P2 only)
  • Tags: Filter by one or more tags
  • Time range: Show alerts from a specific time window
  • Search: Full-text search across alert keys and message bodies

Notification Behavior

Alert Mode changes how notifications are delivered compared to regular messages. Understanding this behavior helps you configure your notification settings effectively.

When Notifications Are Sent

Event                                        | Push Notification       | SSE / WebSocket      | Dashboard Update
New alert (first occurrence)                 | Yes                     | Yes                  | Yes
Additional occurrence (same key, still open) | No (deduplicated)       | Yes (counter update) | Yes
Alert acknowledged                           | No                      | Yes                  | Yes
Alert resolved                               | Yes (resolution notice) | Yes                  | Yes
Alert reopened (new occurrence after resolve)| Yes                     | Yes                  | Yes

Deduplication in Detail

The most important behavioral difference is deduplication. When the same alert_key fires multiple times while the alert is still open:

  • Push notifications: Only the first occurrence triggers a push. Subsequent occurrences are silent on mobile.
  • SSE and WebSocket: All occurrences are streamed to connected clients. The web and mobile apps update the occurrence counter and message body in real time.
  • Dashboard: The alert card updates with the latest message body, incremented occurrence count, and updated last_seen timestamp.

This means your monitoring scripts can fire as frequently as needed without worrying about overwhelming your team with notifications. The dashboard always shows the latest state, while push notifications remain manageable.

Resolution notifications

When an alert is resolved (either manually or via X-Alert-Status: resolved), a single push notification is sent to subscribers informing them the issue is cleared. This helps the team know that an active situation has been handled.

How Alerts Differ from Regular Notifications

Feature               | Regular Messages                      | Alert Mode
Deduplication         | None -- every message is separate     | Messages with same alert_key are grouped
State tracking        | No state                              | Open, Acknowledged, Resolved
Acknowledgment        | Not available                         | Team members can acknowledge alerts
Occurrence counting   | Not available                         | Counts repeat occurrences
Auto-reopen           | Not available                         | Resolved alerts reopen on new occurrence
Notification behavior | Every message triggers a notification | Deduplicated -- only state changes trigger notifications
Dashboard view        | Chronological message list            | Alert cards with status, occurrences, and timeline
Webhook integrations  | Not available                         | Alertmanager, Grafana, Datadog, and more
Resolution tracking   | Not available                         | Records who resolved each alert and when

Benefits

Reduce Notification Fatigue

Without deduplication, a flapping service can generate hundreds of notifications per hour. Alert Mode collapses these into a single alert with an occurrence counter, so your team sees one notification instead of being overwhelmed.

Incident Tracking

Every alert maintains a complete timeline: when it first fired, how many times it occurred, when it was acknowledged, and when it was resolved. This history is invaluable for post-incident reviews and SLA reporting.

Integration with Monitoring Tools

Alert Mode speaks the same language as your existing monitoring stack. Webhook integrations with Alertmanager, Grafana, Datadog, Zabbix, and others let you funnel all your alerts into a single dashboard without changing your monitoring configuration significantly.

Clear Ownership

The acknowledgment step makes it visible to the entire team that someone is working on an issue. No more duplicate investigation efforts or "I thought you were handling that" situations.

Automatic Recovery Detection

By sending X-Alert-Status: resolved from recovery scripts or monitoring tool webhooks, alerts are closed automatically when the underlying issue is fixed. No manual cleanup required.

Works with Existing Workflows

Alert Mode does not require you to change how you publish messages. You simply add the X-Alert-Key header to your existing publish calls. If you already have scripts that send notifications via Notifer, enabling Alert Mode and adding a single header gives you deduplication, state tracking, and acknowledgment without any other changes to your workflow.

Start small

You do not need to enable Alert Mode on all your topics at once. Start with one high-volume topic where notification fatigue is a problem (e.g., your infrastructure monitoring topic), and expand from there once your team is comfortable with the alert workflow.

Real-World Example: Full Alert Cycle

Here is a complete example showing how Alert Mode works in practice:

# 1. Monitoring script detects high CPU
curl -d "CPU usage at 92% on prod-web-01" \
  -H "X-Alert-Key: cpu-high-prod-web-01" \
  -H "X-Priority: 2" \
  -H "X-Tags: cpu,production,web" \
  https://app.notifer.io/infra-alerts
# -> Alert created (OPEN), push notification sent

# 2. CPU stays high -- same alert fires again 1 minute later
curl -d "CPU usage at 95% on prod-web-01" \
  -H "X-Alert-Key: cpu-high-prod-web-01" \
  -H "X-Priority: 2" \
  https://app.notifer.io/infra-alerts
# -> Occurrence count: 2, NO new notification (deduplicated)

# 3. Engineer acknowledges via dashboard or API
# -> Alert state: ACKNOWLEDGED, team sees yellow indicator

# 4. CPU normalizes -- recovery script sends resolution
curl -d "CPU usage normalized at 45% on prod-web-01" \
  -H "X-Alert-Key: cpu-high-prod-web-01" \
  -H "X-Alert-Status: resolved" \
  https://app.notifer.io/infra-alerts
# -> Alert state: RESOLVED, resolution notification sent

# 5. If CPU spikes again later, the alert auto-reopens

Next Steps