How to Filter Out Noisy Logs Using Adaptive Logs Drop Rules in Grafana Cloud


Introduction

If you're managing a platform or observability team, you know that log noise is more than just an annoyance—it's a budget drain. Health check pings, forgotten DEBUG statements, and verbose INFO logs from rarely used services often inflate your bill without adding value. The challenge has always been getting rid of them efficiently without cumbersome infrastructure changes. With the new drop rules feature in Adaptive Logs (currently in public preview), you can now define custom rules to discard low-value logs before they ever reach Grafana Cloud Logs. This how-to guide walks you through the entire process, from prerequisites to advanced tips, so you can start saving money and reducing clutter immediately.


What You Need

  • A Grafana Cloud account (Stack tier that supports Adaptive Logs)
  • Administrator or editor permissions for the Adaptive Logs section
  • Basic familiarity with log labels and log levels (e.g., DEBUG, INFO, ERROR)
  • Access to the Grafana Cloud UI (or the ability to use the API for rule creation)

Step-by-Step Instructions

Step 1: Navigate to the Adaptive Logs Section

Log in to your Grafana Cloud portal. From the main menu, go to Adaptive Logs. If you don't see it, ensure your stack has the feature enabled (it's in public preview, so you may need to contact support). Once inside, you'll see three tabs: Exemptions, Drop Rules, and Recommendations. We'll focus on the Drop Rules tab.

Step 2: Understand the Evaluation Order

Before creating a rule, it's crucial to know how log lines are processed. When a log arrives, the system evaluates it in this order:

  1. Exemptions – Protected logs pass untouched. If a log matches an exemption, no further sampling or dropping is applied.
  2. Drop Rules – Your custom rules are checked in priority order. The first match applies its drop percentage.
  3. Patterns – Optimization recommendations (like automatic pattern detection) apply to remaining logs that weren't exempted or dropped.

This order means you can create layers of control: protect critical logs first, then drop unwanted ones, and finally let the system optimize the rest.
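The three-stage precedence above can be sketched as a small function. This is a toy model of the evaluation order described here, not Grafana's actual implementation; the `matches` helper, the rule fields, and the return strings are all illustrative.

```python
import random

def matches(line, selector):
    # Toy matcher: every label in the selector must match the log line's labels.
    return all(line["labels"].get(k) == v for k, v in selector.items())

def process_log(line, exemptions, drop_rules):
    """Hypothetical sketch of the Adaptive Logs evaluation order:
    exemptions first, then drop rules in priority order, then
    pattern optimization on whatever remains."""
    # 1. Exemptions: protected logs pass through untouched.
    if any(matches(line, ex) for ex in exemptions):
        return "kept (exempt)"
    # 2. Drop rules: the first matching rule applies its drop percentage.
    for rule in drop_rules:
        if matches(line, rule["selector"]):
            if random.random() < rule["drop_percentage"] / 100:
                return "dropped"
            return "kept (sampled)"
    # 3. Patterns: remaining logs flow into automatic optimization.
    return "kept (eligible for pattern optimization)"
```

Note how an exemption short-circuits everything else: a log that matches one is never even tested against the drop rules.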

Step 3: Create Your First Drop Rule

Click the Create Drop Rule button. You'll be presented with a form that allows you to define the logic using:

  • Label selectors – Target specific services or environments (e.g., service=health-check).
  • Log level – Choose from DEBUG, INFO, WARN, ERROR, etc.
  • Line content – Match text patterns using regular expressions.

You can combine these criteria with AND/OR logic. For example, drop logs where level=DEBUG AND service=my-old-service.
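To make the AND semantics concrete, here is the example rule from the text expressed as data, with a toy matcher. The field names (`match_all`, `line_regex`, and so on) are invented for illustration and are not the actual Grafana Cloud rule schema.

```python
import re

# Hypothetical encoding of the rule: drop logs where
# level=DEBUG AND service=my-old-service.
rule = {
    "match_all": [                # AND semantics: every condition must hold
        {"label": "service", "equals": "my-old-service"},
        {"label": "level", "equals": "DEBUG"},
    ],
    "line_regex": None,           # optional regex on the log line text itself
    "drop_percentage": 100,
}

def rule_matches(rule, labels, line):
    # All label conditions must match (AND)...
    if not all(labels.get(c["label"]) == c["equals"] for c in rule["match_all"]):
        return False
    # ...and the line-content regex, if present, must also match.
    if rule["line_regex"] and not re.search(rule["line_regex"], line):
        return False
    return True
```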

Step 4: Set the Drop Percentage

Each rule has a drop percentage slider. Use 100% to completely discard matching logs, or a lower value (like 90%) to sample them. This is useful for logs you don't want to lose entirely—you just want to reduce volume. For instance, a batch job that repeats the same success log 10,000 times can be set to drop 99% of those lines while keeping a representative subset.
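A quick simulation shows what a 99% drop rate means in practice for that batch job. This assumes per-line probabilistic sampling, which is a simplification of however the platform actually samples:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible
drop_percentage = 99

# A batch job emits the same success line 10,000 times; at a 99% drop
# rate, roughly 100 representative lines survive.
kept = sum(1 for _ in range(10_000) if random.random() >= drop_percentage / 100)
print(f"{kept} of 10,000 lines kept")
```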

Step 5: Order and Prioritize Rules

Rules are evaluated in the order they appear in the list. Drag and drop to rearrange. Important: The first matching rule applies. So place more specific or higher-priority rules at the top. For example, if you have a blanket rule to drop all DEBUG logs, but you also want to keep DEBUG logs from a critical service, put the keep-rule (as an exemption) above the drop rule.
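First-match semantics can be illustrated with two toy rules, the specific one placed above the blanket one. (In the real product the "keep" would be an exemption, as noted above; here a 0% drop rule stands in for it purely to show why ordering matters.)

```python
# Hypothetical rule list: the specific rule sits above the blanket DEBUG rule.
rules = [
    {"name": "keep critical DEBUG",
     "selector": {"level": "DEBUG", "service": "payments"}, "drop_percentage": 0},
    {"name": "drop all DEBUG",
     "selector": {"level": "DEBUG"}, "drop_percentage": 100},
]

def first_match(labels, rules):
    # Walk the list top to bottom; the first rule whose selector matches wins.
    for rule in rules:
        if all(labels.get(k) == v for k, v in rule["selector"].items()):
            return rule["name"]
    return None
```

If the two rules were swapped, the blanket rule would match first and the critical service's DEBUG logs would be dropped.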

Step 6: Test and Validate

Before saving, use the Preview feature (if available) to see how many logs would be affected. You can also check the Estimated Savings panel. Once satisfied, click Save. The rule becomes active immediately, and from that point on, matching logs will be dropped before ingestion.

Step 7: Monitor and Iterate

After a few hours, revisit the Adaptive Logs dashboard to review volume changes. Look for any unexpected drops—maybe a rule caught important logs by mistake. Adjust drop percentages or refine label selectors as needed. Over time, you'll build a set of rules that keep your log pipeline clean and cost-effective.

Practical Examples

Here are three common scenarios you can implement with drop rules:

  • Drop all DEBUG logs: Create a rule with log level = DEBUG and drop percentage = 100%. This instantly eliminates all verbose debug output across your entire infrastructure.
  • Sample repetitive logs: If a particular service (e.g., batch-processor) generates thousands of identical lines per minute, create a rule with label selector service=batch-processor and drop percentage = 90%. This keeps 10% of the logs for troubleshooting while cutting volume by 90%.
  • Target a noisy producer: Suppose a service has recently started emitting high-volume, low-value logs. Combine a specific label selector (e.g., namespace=noisy-ns) with a log level filter (e.g., INFO) and a line content match (e.g., heartbeat). Set drop to 100%.
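The three scenarios above could be captured as a rule list like the following. The field names are illustrative only, not the actual Grafana Cloud rule schema:

```python
# Hypothetical definitions for the three example scenarios.
drop_rules = [
    {"name": "drop all DEBUG",
     "level": "DEBUG", "drop_percentage": 100},
    {"name": "sample batch-processor",
     "labels": {"service": "batch-processor"}, "drop_percentage": 90},
    {"name": "silence heartbeats",
     "labels": {"namespace": "noisy-ns"}, "level": "INFO",
     "line_contains": "heartbeat", "drop_percentage": 100},
]
```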

Tips for Success

  • Start small: Create rules with low drop percentages (e.g., 10%) first to measure impact before going to 100%.
  • Use exemptions wisely: Protect logs required for compliance or debugging (e.g., ERROR logs from critical services) by adding exemptions. These override drop rules.
  • Combine with recommendations: Drop rules work alongside Adaptive Logs' automatic pattern detection. Use rules to remove known noise, then let the system optimize the rest.
  • Review regularly: As your services evolve, so does log behavior. Revisit your drop rules monthly to ensure they still align with your needs.
  • Document your rules: Add descriptions to each rule so your team understands why and when a rule was created.
  • Take advantage of the preview: Before saving, always run a preview to avoid accidentally dropping important data.

By following these steps and tips, you'll transform your log management from a noisy, expensive headache into a lean, efficient system. Adaptive Logs drop rules put control back in your hands—no more toilsome change requests or bloated bills.
