
Filter Rules

Filter rules let you process and filter incoming webhook events before they create alerts. Instead of receiving every single event from your monitoring tools, you define rules that decide which events become alerts, which get dropped, and which get modified on the way in.

Why Use Filter Rules?

Monitoring tools are noisy. A typical Prometheus + Alertmanager setup can generate hundreds of events per hour, and not all of them deserve your attention. Filter rules solve this by giving you fine-grained control over what reaches your alert feed:

  • Drop noisy alerts -- Discard low-severity or informational events that clutter your dashboard
  • Modify priority -- Automatically escalate critical alerts to P1 so they trigger immediate push notifications
  • Add tags -- Enrich alerts with environment, team, or service tags for better organization
  • Route events -- Accept only the events that matter for a specific topic

Start Simple

You do not need filter rules to get started with webhooks. By default, all incoming events are accepted and converted to alerts. Add rules only when you need to reduce noise or customize behavior.

Rule Evaluation Order

Rules are processed top to bottom in the order you define. The first matching rule wins -- once an event matches a rule, that rule's action is applied and no further rules are evaluated.

If no rule matches, the event is accepted by default (an alert is created with the original payload).

Incoming Event
     |
     v
[ Rule 1 ] -- match? --> Apply action (stop)
     |
     no
     v
[ Rule 2 ] -- match? --> Apply action (stop)
     |
     no
     v
[ Rule 3 ] -- match? --> Apply action (stop)
     |
     no
     v
[ Default: Accept ] --> Create alert as-is

First Match Wins

This is important to understand. If you have a broad "drop all info-level" rule above a specific "accept info from production" rule, the info events from production will be dropped before reaching the second rule. Order matters.
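The first-match-wins flow can be sketched in a few lines. This is an illustrative model only, not the service's implementation: the rule and event shapes are hypothetical, and condition matching is reduced to top-level equality.

```python
# Illustrative first-match-wins evaluation (hypothetical shapes;
# real conditions also support operators and nested field paths).

def evaluate(rules, event):
    """Return the action of the first matching rule, or 'accept' by default."""
    for rule in rules:
        if all(event.get(c["field"]) == c["value"] for c in rule["conditions"]):
            return rule["action"]  # first match wins: stop evaluating
    return "accept"                # no rule matched: default accept

# The ordering pitfall described above: the broad rule shadows the specific one.
rules = [
    {"name": "Drop all info",
     "conditions": [{"field": "severity", "value": "info"}],
     "action": "drop"},
    {"name": "Accept info from production",  # never reached for info events
     "conditions": [{"field": "severity", "value": "info"},
                    {"field": "env", "value": "production"}],
     "action": "accept"},
]

print(evaluate(rules, {"severity": "info", "env": "production"}))  # drop
```

Swapping the two rules lets production info events reach the accept rule first, which is exactly why order matters.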

Creating Rules

Filter rules are configured per webhook integration through the web app:

  1. Navigate to your topic in the dashboard
  2. Open Settings > Webhook Integrations
  3. Click on the webhook integration you want to configure
  4. Scroll to the Filter Rules section
  5. Click Add Rule

You can also manage rules programmatically via the Alert API.

Rule Anatomy

Every filter rule consists of a name, one or more conditions, and an action.

Name

A descriptive identifier for the rule. Choose something that makes the rule's purpose clear at a glance.

Good examples:

  • "Drop info-level alerts"
  • "Escalate critical to P1"
  • "Tag production events"
  • "Drop test/synthetic alerts"

Conditions

Conditions define when a rule matches an incoming event. Each condition evaluates a field in the webhook payload.

Field Path

The field path specifies which part of the incoming JSON payload to evaluate. Dot notation is supported for nested fields.

Field Path            What It Matches
severity              Top-level severity field
labels.severity       Nested severity inside labels object
labels.env            Environment label
alertname             Alert name (common in Alertmanager payloads)
status                Alert status (firing, resolved)
labels.team           Team label
annotations.summary   Alert summary annotation

Payload Structure Varies

The available field paths depend on the webhook type. An Alertmanager payload has different fields than a Grafana or Datadog payload. Use the Test Rules feature to inspect your actual payload structure.
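Dot-notation lookup can be pictured as walking the payload one key at a time, where any missing step means the field is absent. The helper below is a sketch of that behavior, not the service's actual code.

```python
# Sketch of dot-notation field resolution into a nested JSON payload.

def resolve(payload, path):
    """Follow 'labels.env'-style paths; return None when any key is missing."""
    value = payload
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None  # field absent (relevant to exists/not_exists)
        value = value[key]
    return value

payload = {"alertname": "HighCPU", "labels": {"env": "production"}}
print(resolve(payload, "labels.env"))   # production
print(resolve(payload, "labels.team"))  # None
```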

Operators

Operator       Description                        Example
equals         Exact match (case-sensitive)       severity equals critical
not_equals     Does not match                     status not_equals resolved
contains       Field contains substring           alertname contains cpu
not_contains   Field does not contain substring   alertname not_contains test
regex          Regular expression match           alertname regex ^disk.*full$
exists         Field is present in payload        labels.env exists
not_exists     Field is not present in payload    labels.team not_exists
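The operator semantics in the table can be sketched as a small dispatch function. The exact behavior in the service (regex flavor, handling of non-string values, how missing fields interact with each operator) is an assumption here.

```python
import re

# Hedged sketch of the seven operators; field_value is the resolved
# payload field, or None when the field is absent.

def matches(op, field_value, expected):
    if op == "exists":
        return field_value is not None
    if op == "not_exists":
        return field_value is None
    if field_value is None:  # assumed: missing fields fail value comparisons
        return False
    if op == "equals":
        return field_value == expected  # case-sensitive
    if op == "not_equals":
        return field_value != expected
    if op == "contains":
        return expected in field_value
    if op == "not_contains":
        return expected not in field_value
    if op == "regex":
        return re.search(expected, field_value) is not None
    raise ValueError(f"unknown operator: {op}")

print(matches("contains", "cpu_usage_high", "cpu"))         # True
print(matches("regex", "disk_root_full", "^disk.*full$"))   # True
```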

Multiple Conditions (AND Logic)

When a rule has multiple conditions, all conditions must match for the rule to trigger. This is AND logic.

{
  "conditions": [
    { "field": "severity", "operator": "equals", "value": "critical" },
    { "field": "labels.env", "operator": "equals", "value": "production" }
  ]
}

This rule only matches events where severity is "critical" AND the environment is "production".

OR Logic

If you need OR logic (match if any condition is true), create separate rules for each condition. Since rules are evaluated independently in order, multiple rules effectively give you OR behavior.

Actions

The action determines what happens when all conditions match.

Action   Behavior
accept   Create an alert from the event (no modifications)
drop     Discard the event silently -- no alert is created
modify   Change the alert's priority and/or tags, then accept it

Modifications (Modify Action Only)

When the action is modify, you can apply one or both of these changes before the alert is created:

  • Set Priority -- Override the alert priority (1-5). For example, force all matching events to P1.
  • Add Tags -- Append additional tags to the alert. Existing tags from the payload are preserved.

{
  "action": "modify",
  "modifications": {
    "priority": 1,
    "add_tags": ["production", "escalated"]
  }
}
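One way to picture the modify action: the priority is overridden, while add_tags appends to whatever tags the payload already carried. The field names mirror the JSON above; the merge behavior (including skipping duplicate tags) is an assumption.

```python
# Sketch of applying a modify action's modifications before alert creation.

def apply_modifications(alert, modifications):
    modified = dict(alert)
    if "priority" in modifications:
        modified["priority"] = modifications["priority"]  # override priority
    if "add_tags" in modifications:
        # Existing payload tags are preserved; new tags are appended.
        # Skipping duplicates is an assumption, not documented behavior.
        existing = list(alert.get("tags", []))
        modified["tags"] = existing + [t for t in modifications["add_tags"]
                                       if t not in existing]
    return modified

alert = {"title": "HighCPU", "priority": 3, "tags": ["cpu"]}
mods = {"priority": 1, "add_tags": ["production", "escalated"]}
print(apply_modifications(alert, mods))
```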

Examples

1. Drop Low-Severity Alerts

Discard informational alerts that do not require attention.

Rule configuration:

Field       Value
Name        Drop info-level alerts
Condition   severity equals info
Action      drop

{
  "name": "Drop info-level alerts",
  "conditions": [
    { "field": "severity", "operator": "equals", "value": "info" }
  ],
  "action": "drop"
}

Result: Any webhook event with "severity": "info" is silently discarded.

2. Escalate Critical Alerts to P1

Ensure critical alerts always get the highest priority, triggering immediate push notifications.

Rule configuration:

Field          Value
Name           Escalate critical to P1
Condition      severity equals critical
Action         modify
Set Priority   1

{
  "name": "Escalate critical to P1",
  "conditions": [
    { "field": "severity", "operator": "equals", "value": "critical" }
  ],
  "action": "modify",
  "modifications": {
    "priority": 1
  }
}

Result: Critical events become P1 alerts, which push to mobile devices with urgent notification sounds.

3. Add Environment Tags

Automatically tag alerts from production with the production tag for easy filtering.

Rule configuration:

Field       Value
Name        Tag production alerts
Condition   labels.env equals production
Action      modify
Add Tags    production

{
  "name": "Tag production alerts",
  "conditions": [
    { "field": "labels.env", "operator": "equals", "value": "production" }
  ],
  "action": "modify",
  "modifications": {
    "add_tags": ["production"]
  }
}

Result: Production alerts get a production tag appended, making them easy to filter in the dashboard and mobile app.

4. Drop Test and Synthetic Alerts

Prevent test or synthetic monitoring alerts from creating noise in your alert feed.

Rule configuration:

Field       Value
Name        Drop test alerts
Condition   alertname contains test
Action      drop

{
  "name": "Drop test alerts",
  "conditions": [
    { "field": "alertname", "operator": "contains", "value": "test" }
  ],
  "action": "drop"
}

Result: Any event whose alertname contains "test" (e.g., TestAlert, synthetic-test-ping) is discarded.

5. Combined: Escalate Production Critical, Drop Staging Info

A realistic multi-rule setup for a shared monitoring topic:

Rule 1 -- Drop staging info:

{
  "name": "Drop staging info",
  "conditions": [
    { "field": "labels.env", "operator": "equals", "value": "staging" },
    { "field": "severity", "operator": "equals", "value": "info" }
  ],
  "action": "drop"
}

Rule 2 -- Escalate production critical:

{
  "name": "Escalate production critical",
  "conditions": [
    { "field": "labels.env", "operator": "equals", "value": "production" },
    { "field": "severity", "operator": "equals", "value": "critical" }
  ],
  "action": "modify",
  "modifications": {
    "priority": 1,
    "add_tags": ["production", "critical"]
  }
}

Rule 3 -- Tag all production events:

{
  "name": "Tag production",
  "conditions": [
    { "field": "labels.env", "operator": "equals", "value": "production" }
  ],
  "action": "modify",
  "modifications": {
    "add_tags": ["production"]
  }
}

With these three rules in order, staging info is dropped first, then production criticals are escalated, and remaining production events get tagged.
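The three-rule pipeline above can be exercised end to end with a simplified evaluator. This is a sketch under assumptions: dot-path lookup, equals-only conditions, and hypothetical event shapes; it only shows which rule fires, not the resulting alert.

```python
# End-to-end sketch of the three rules above with a minimal evaluator.

def lookup(payload, path):
    """Resolve a dot-notation path; None when any key is missing."""
    value = payload
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None
        value = value[key]
    return value

def run(rules, event):
    """First match wins; fall through to the default accept."""
    for rule in rules:
        if all(lookup(event, c["field"]) == c["value"] for c in rule["conditions"]):
            return rule["name"], rule["action"]
    return None, "accept"

rules = [
    {"name": "Drop staging info", "action": "drop",
     "conditions": [{"field": "labels.env", "value": "staging"},
                    {"field": "severity", "value": "info"}]},
    {"name": "Escalate production critical", "action": "modify",
     "conditions": [{"field": "labels.env", "value": "production"},
                    {"field": "severity", "value": "critical"}]},
    {"name": "Tag production", "action": "modify",
     "conditions": [{"field": "labels.env", "value": "production"}]},
]

print(run(rules, {"severity": "info", "labels": {"env": "staging"}}))
# ('Drop staging info', 'drop')
print(run(rules, {"severity": "warning", "labels": {"env": "production"}}))
# ('Tag production', 'modify')
```

Note that a production critical event matches both rule 2 and rule 3; only rule 2 fires because it comes first.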

Testing Rules with Sample Payloads

Before deploying rules to production, test them against sample payloads to verify they behave as expected.

Using the Web App

  1. Open your webhook integration settings
  2. Scroll to Filter Rules
  3. Click Test Rules
  4. Paste a sample JSON payload into the editor
  5. Click Run Test

The test result shows:

  • Which rule matched (or "No match -- default accept")
  • What action would be taken
  • The resulting alert preview (with any modifications applied)

Using the API

curl -X POST https://app.notifer.io/api/topics/my-alerts/webhooks/{webhook_id}/rules/test \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "payload": {
      "alertname": "HighCPU",
      "severity": "critical",
      "labels": {
        "env": "production",
        "instance": "web-01"
      },
      "annotations": {
        "summary": "CPU usage above 95% for 5 minutes"
      }
    }
  }'

Response:

{
  "matched_rule": {
    "id": "uuid",
    "name": "Escalate critical to P1",
    "order": 2
  },
  "action": "modify",
  "result": {
    "would_create_alert": true,
    "priority": 1,
    "tags": ["production", "critical"],
    "title": "HighCPU",
    "message": "CPU usage above 95% for 5 minutes"
  }
}

Test Regularly

Whenever you add or reorder rules, run tests with representative payloads from each of your monitoring tools. This catches ordering mistakes before they affect production alerting.

Rule Ordering and Reordering

Since the first matching rule wins, the order of your rules is critical.

Reordering in the Web App

  • Drag and drop: Click and hold the grip handle on the left side of a rule, then drag it to a new position
  • Up/Down buttons: Use the arrow buttons on each rule to move it one position up or down

Reordering via API

curl -X PUT https://app.notifer.io/api/topics/my-alerts/webhooks/{webhook_id}/rules/reorder \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "rule_ids": [
      "rule-uuid-3",
      "rule-uuid-1",
      "rule-uuid-2"
    ]
  }'

The rule_ids array defines the new order. The first ID in the array becomes order position 1.

Best Practices

Put Drop Rules First

Placing drop rules at the top of the list is the most efficient ordering. Events that should be discarded are caught early, before being evaluated against more specific rules.

1. Drop test alerts           <-- broad filter, catches noise early
2. Drop info-level            <-- another broad filter
3. Escalate critical to P1    <-- specific modification
4. Tag production events      <-- broad modification (catch-all)

Use Specific Conditions

Prefer specific field paths and exact matches over broad patterns:

# Specific and predictable
severity equals critical

# Too broad -- might match unintended events
annotations.summary contains error

Test Before Deploying

Always test new rules with sample payloads that represent your real monitoring data. Pay special attention to:

  • Events that should be dropped -- verify they actually match a drop rule
  • Events that should be escalated -- confirm the priority is set correctly
  • Edge cases -- events that might match multiple rules

Monitor Match Counts

Each rule tracks how many times it has matched. Review these counts periodically:

  • A rule with zero matches may have conditions that are too specific or never occur
  • A rule with unexpectedly high matches may be too broad and catching events you intended to pass through
  • A sudden change in match counts can indicate a change in your monitoring tool's payload format

Keep Rules Manageable

If you find yourself with more than 10 rules on a single webhook, consider:

  • Splitting alerts across multiple topics with separate webhooks
  • Adjusting your monitoring tool's alerting rules to reduce noise at the source
  • Using more specific conditions to consolidate related rules

Next Steps