
Mute Specific Errors in Grafana

In some cases, you might want to stop Grafana from sending notifications for errors you cannot control. For example, if your monitored devices are undergoing maintenance for an extended period, your Grafana instance might fire DatasourceError alerts every few minutes.

Long story short: if you are reading this, you already know why you want Grafana to shut up, so let’s dive right in.

DatasourceError is an internal Grafana error type. It is triggered when a data source, for example your Loki instance, does not return any data to Grafana or the connection fails. This can happen during a server hiccup, an outage, or simply a maintenance window. Let’s find out how to temporarily silence these errors.

You need admin privileges on the Grafana instance for this, as we will have to manually edit the alerting configuration in the admin area.

Create a contact point

A contact point defines an endpoint that a Grafana alert is delivered to, such as an email address, a Slack channel, or similar.

Go to Contact Points and add a new one. Call it whatever you like, and choose “Webhook” as the integration type. Mine is called Null Contact.

For the URL, use: http://127.0.0.1:9/. Port 9 is reserved for the Discard service, which accepts whatever it receives and throws it away. It does not actually matter what URL we put here, as long as it never works, but by using the localhost IP and port 9 we signal that we expect no data to be processed.
In the notification settings, you can also enable Disable resolved message, as we won’t use resolved notifications either.

Grafana screenshot of the "new contact point" dialog, filled out according to the blog post.
The new contact point with all options
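For the curious: in the alerting configuration JSON (which we will open later anyway), a contact point like this ends up as an entry in the “receivers” list. The following is only a rough, hand-written sketch, and field names such as grafana_managed_receiver_configs or disableResolveMessage may differ between Grafana versions, so check against your own instance:

{
  "name": "Null Contact",
  "grafana_managed_receiver_configs": [
    {
      "name": "Null Contact",
      "type": "webhook",
      "settings": {
        "url": "http://127.0.0.1:9/"
      },
      "disableResolveMessage": true
    }
  ]
}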

Next, we create a new notification policy to use this contact point.

Create a notification policy

Start by creating a new nested notification policy. As the matching label, choose alertname and match it against the error name you want to mute; I used DatasourceError.
The relevant settings here are:

  • Contact point: select the one we created earlier.
  • Continue matching subsequent sibling nodes: should be off.
  • Override grouping: depending on your setup, it can be more efficient to let Grafana group all matched alerts by alertname.

Save the policy and we are almost done.
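Under the hood, Grafana stores this nested policy as a single route entry in the JSON configuration we are about to edit. As a rough sketch (the exact matcher fields can differ between Grafana versions, so treat this as illustrative rather than authoritative), it looks something like:

{
  "receiver": "Null Contact",
  "object_matchers": [
    ["alertname", "=", "DatasourceError"]
  ],
  "continue": false,
  "group_by": ["alertname"]
}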

The notification order matters

Now, if you simply save the nested policy, it lands at the bottom of your policy list. This is a problem because Grafana evaluates policies from top to bottom. There is also no way to change the ordering in the UI; it has been an open feature request since 2022.

Therefore, we need to change the ordering ourselves by manually editing the JSON configuration, which you can do from the UI with an admin account.

Go to Alerting > Admin. There you will find the complete alerting configuration as a JSON document. Make a backup of it and save it on your computer, so you can restore it if something goes wrong. If you have many notification policies and alerts, copy the JSON into a text editor with syntax highlighting for easier editing.

The JSON is structured like a tree (think of a file system with files and folders). Find the key “alertmanager_config” and look for the sub-tree “routes”; this is where all the notification policies live. Now you need to move the correct one to the top.

Square brackets [] define an array, a list of items, and each item is enclosed in curly braces {}. Scroll through the list item by item until you find the one whose “receiver” is the contact point you created earlier (e.g., Null Contact).
Now cut that item, scroll to the top of the list, and paste it back in as the first element of the “routes” array. It should look something like the snippet below:

Snippet from the Grafana alert configuration in JSON format, showing the newly created route inserted as the first item of the array.
Snippet from the Grafana alert configuration
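In text form, the reordered section might look roughly like the following hand-written sketch. The second route and the “grafana-default-email” receiver stand in for whatever policies your instance already has, the other keys of a real configuration (receivers, templates, and so on) are omitted, and exact field names can vary by Grafana version:

{
  "alertmanager_config": {
    "route": {
      "receiver": "grafana-default-email",
      "routes": [
        {
          "receiver": "Null Contact",
          "object_matchers": [
            ["alertname", "=", "DatasourceError"]
          ],
          "continue": false
        },
        {
          "receiver": "grafana-default-email",
          "object_matchers": [
            ["team", "=", "backend"]
          ]
        }
      ]
    }
  }
}

Only the position within “routes” changes; the route itself stays exactly as the UI created it.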

Note: you need to account for the commas between items. Remove the comma left dangling where you cut the item, and add one after the policy you pasted at the top; JSON does not allow a trailing comma after the last item in an array. If you somehow mess up the JSON format, Grafana might warn you about it:

Error message caused by a missing comma in the JSON. The error reads: JSON.parse: expected ',' or ']' after array element at line 27 column 9 of the JSON data.
Error warning by Grafana when saving a faulty configuration

Once you are done editing, save the configuration and return to the Notification Policies tab. You should see your Null Contact policy at the top; whenever an alert matches it, no notification will be sent.

Last remark: the notification policy may show a (⚠ error) warning. That is expected behavior, since the destination URL is intentionally invalid and the connection will be refused.


