Azure Monitor Missing Metrics? 8 Fixes [2026]

exodata.io
Azure | Modern Workplace | Troubleshooting

Published on: 18 June 2025

Azure Monitor is one of the most powerful tools in the Microsoft cloud for tracking performance, diagnosing issues, and maintaining operational visibility. Whether your organization manages its own infrastructure or relies on cloud engineering expertise, Azure Monitor is central to staying on top of resource health. But what happens when the data you expect to see simply doesn’t show up?

Whether you’re troubleshooting a virtual machine, App Service, storage account, or container instance, missing metrics in Azure Monitor can slow down your team’s response and make root cause analysis harder than it should be.

This post walks through the most common reasons Azure Monitor metrics fail to appear—and how to fix them. For a general overview of the platform, see Microsoft’s Azure Monitor documentation.

Common Symptoms

  • Expected performance metrics like CPU, memory, or disk activity are blank

  • Log queries return data, but charts show nothing

  • Alerts are not triggering due to missing metric data

  • Custom metrics from applications are not surfacing in dashboards

What to Check First

1. Resource Type and Supported Metrics

Start by confirming that the resource type you’re monitoring actually supports the metric you’re looking for. Not all Azure services emit all metrics. For example, certain storage tiers or older SKUs may not publish metrics at all.

Use this official Microsoft reference to verify: https://learn.microsoft.com/azure/azure-monitor/reference/supported-metrics

2. Time Window and Resolution

Check the time window and granularity settings in your chart or query. If your resource was idle or newly deployed, a small time window might not show any metric activity. Expand your view to cover a broader time span, like the past 24 hours.

Also confirm the chart is set to an appropriate aggregation type (Average, Count, Total).
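When scripting metric queries, it is easy to mix local time and UTC and end up with an empty window. As a minimal sketch (plain Python, not part of any Azure SDK), this builds a UTC 24-hour window in the ISO 8601 shape that `az monitor metrics list` accepts for `--start-time` and `--end-time`:

```python
from datetime import datetime, timedelta, timezone

def metrics_window(hours: int = 24) -> tuple[str, str]:
    """Return an ISO 8601 UTC (start, end) pair covering the last `hours`."""
    end = datetime.now(timezone.utc).replace(microsecond=0)
    start = end - timedelta(hours=hours)
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # timestamp shape the Azure CLI accepts
    return start.strftime(fmt), end.strftime(fmt)

start, end = metrics_window(24)
print(f"--start-time {start} --end-time {end}")
```

Widening `hours` is often all it takes to make a "missing" metric reappear on a newly deployed or mostly idle resource.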

Deeper Troubleshooting Steps

3. Verify Monitoring Agent Deployment

Some metrics depend on agents or extensions to be collected. For example:

  • Virtual Machines require the Azure Monitor Agent (AMA); the legacy Log Analytics agent (MMA) was retired in August 2024 and should be migrated

  • Container Insights needs the monitoring extension installed on the cluster nodes

  • Custom Applications must explicitly push custom metrics via Azure Monitor SDKs or APIs

Use the Azure portal to confirm the correct agent is installed and running.
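At scale, checking each VM in the portal is slow. A small Python sketch of the same check, run against the parsed JSON from `az vm extension list` (field names such as `typePropertiesType` and `provisioningState` are assumed from typical CLI output, so verify against your own output):

```python
AMA_TYPES = {"AzureMonitorWindowsAgent", "AzureMonitorLinuxAgent"}

def ama_status(extensions: list[dict]) -> str:
    """Summarize AMA state from a parsed VM extension list."""
    for ext in extensions:
        if ext.get("typePropertiesType") in AMA_TYPES:
            return f"{ext.get('name')}: {ext.get('provisioningState', 'Unknown')}"
    return "Azure Monitor Agent not installed"

# Illustrative sample shaped like one entry of `az vm extension list` output
sample = [{"name": "AzureMonitorLinuxAgent",
           "typePropertiesType": "AzureMonitorLinuxAgent",
           "provisioningState": "Succeeded"}]
print(ama_status(sample))
```

Anything other than a `Succeeded` provisioning state, or no AMA entry at all, explains missing guest-level metrics for that VM.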

4. Confirm Diagnostic Settings

For resources like App Service, Storage Accounts, and Key Vault, metrics collection often depends on diagnostic settings being configured.

Check that:

  • The resource has a diagnostic settings profile enabled

  • Metrics collection is turned on

  • Output is directed to the correct Log Analytics workspace, Event Hub, or storage account

Navigate to the resource > Monitoring > Diagnostic settings.
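The same three checks can be scripted against the JSON returned by `az monitor diagnostic-settings list`. A minimal sketch (field names `metrics`, `workspaceId`, `eventHubName`, and `storageAccountId` are assumed from typical CLI output and may differ by CLI version):

```python
def diagnostics_gaps(setting: dict) -> list[str]:
    """Flag common gaps in one parsed diagnostic-settings entry."""
    gaps = []
    if not any(m.get("enabled") for m in setting.get("metrics", [])):
        gaps.append("no metric category enabled")
    if not (setting.get("workspaceId") or setting.get("eventHubName")
            or setting.get("storageAccountId")):
        gaps.append("no destination configured")
    return gaps

# Illustrative entry: metrics on, routed to a Log Analytics workspace
sample = {"metrics": [{"category": "AllMetrics", "enabled": True}],
          "workspaceId": "/subscriptions/.../workspaces/myLAW"}
print(diagnostics_gaps(sample))  # an empty list means the basics look right
```

An empty result does not guarantee data is flowing, but a non-empty one pinpoints exactly which of the three conditions above is broken.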

5. Check for Role or Permission Issues

Azure Monitor respects RBAC. If your user account or service principal does not have access to the data, the metrics will appear missing even if they are being collected. Proper security and compliance practices should include regular audits of monitoring permissions alongside other access controls.

Ensure your account has:

  • Reader or Monitoring Reader access at minimum

  • Workspace permissions if viewing data in Log Analytics
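To audit this programmatically, you can scan the output of `az role assignment list --assignee <id> --all` for a role that grants metric read access. A minimal sketch (the role set below is illustrative, not exhaustive; `roleDefinitionName` is the field the CLI emits):

```python
METRIC_READ_ROLES = {"Reader", "Monitoring Reader", "Monitoring Contributor",
                     "Contributor", "Owner"}

def can_view_metrics(assignments: list[dict]) -> bool:
    """True if any parsed role assignment carries a metrics-reading role."""
    return any(a.get("roleDefinitionName") in METRIC_READ_ROLES
               for a in assignments)
```

A user with only a narrowly scoped role such as Billing Reader will see empty charts even though collection is working perfectly.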

Additional Tips

  • Use “Metrics Explorer” in the Azure portal for real-time metric validation

  • Enable Activity Log alerts to track if monitoring settings change unexpectedly

  • Use Azure Resource Graph to find which resources are missing diagnostics or agent extensions

  • Check API or SDK usage if custom metrics are being emitted from code

Diagnostic Commands: Azure CLI and PowerShell

When portal-based troubleshooting is not enough, Azure CLI and PowerShell provide deeper visibility into monitoring configurations and can help identify the root cause of missing metrics programmatically.

Verify Diagnostic Settings Across Resources

# List all diagnostic settings for a specific resource
az monitor diagnostic-settings list \
  --resource "/subscriptions/{sub-id}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines/{vm-name}"

# Find VMs without a boot diagnostics profile using Azure Resource Graph
# (note: diagnostic settings are separate microsoft.insights/diagnosticSettings
# resources, not a property of the VM, so this is only a quick proxy check)
az graph query -q "
  resources
  | where type =~ 'microsoft.compute/virtualmachines'
  | where isnull(properties.diagnosticsProfile)
  | project name, resourceGroup, location
"

Check Azure Monitor Agent Health

# Verify AMA extension is installed and healthy on a VM
Get-AzVMExtension -ResourceGroupName "myRG" -VMName "myVM" |
  Where-Object { $_.ExtensionType -eq "AzureMonitorWindowsAgent" -or
                 $_.ExtensionType -eq "AzureMonitorLinuxAgent" } |
  Select-Object Name, ProvisioningState, ExtensionType

# List all Data Collection Rules and their associations
Get-AzDataCollectionRule -ResourceGroupName "myRG" |
  Select-Object Name, Location, @{N='Destinations';E={$_.DestinationLogAnalytic.WorkspaceResourceId}}

# Check DCR associations for a specific VM
Get-AzDataCollectionRuleAssociation -TargetResourceId `
  "/subscriptions/{sub-id}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines/{vm-name}"

Query Metric Availability Directly

# List available metrics for a resource (confirms what the resource actually emits)
az monitor metrics list-definitions \
  --resource "/subscriptions/{sub-id}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines/{vm-name}" \
  --output table

# Query a specific metric to verify data is flowing
az monitor metrics list \
  --resource "/subscriptions/{sub-id}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines/{vm-name}" \
  --metric "Percentage CPU" \
  --interval PT1H \
  --start-time 2026-01-01T00:00:00Z \
  --end-time 2026-01-01T23:59:59Z

Verify Resource Provider Registration

# Check if Microsoft.Insights is registered
az provider show --namespace Microsoft.Insights --query "registrationState" --output tsv

# Register it if needed
az provider register --namespace Microsoft.Insights

These commands are particularly useful when managing monitoring at scale across dozens or hundreds of resources. For organizations building Azure dashboards and workbooks, verifying metric availability programmatically is faster and more reliable than checking each resource individually through the portal.

Advanced Diagnostic Scenarios

Beyond the common causes covered above, several less obvious scenarios can lead to missing metrics in Azure Monitor.

Metric Namespace Mismatches

Azure resources can emit metrics under different namespaces. For example, a virtual machine emits host-level metrics under Microsoft.Compute/virtualMachines, while guest-level metrics collected by the Azure Monitor Agent appear under azure.vm.windows.guest or azure.vm.linux.guest. If you are looking for memory utilization in Metrics Explorer but have selected the wrong namespace, the chart will appear empty even though the data exists.

When building dashboards or alert rules, always verify that you are targeting the correct metric namespace. Platform metrics (host-level) are available without any agent, while guest metrics require the AMA and a properly configured DCR.
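A quick way to see where each metric lives is to group the output of `az monitor metrics list-definitions` by namespace. A minimal Python sketch (the `namespace` and nested `name.value` fields are assumed from typical CLI JSON output):

```python
def metrics_by_namespace(definitions: list[dict]) -> dict[str, list[str]]:
    """Group parsed metric definitions by their metric namespace."""
    grouped: dict[str, list[str]] = {}
    for d in definitions:
        grouped.setdefault(d.get("namespace", "<unknown>"), []).append(
            d["name"]["value"])
    return grouped

# Illustrative sample: one host-level metric, one guest-level metric
sample = [
    {"namespace": "Microsoft.Compute/virtualMachines",
     "name": {"value": "Percentage CPU"}},
    {"namespace": "azure.vm.linux.guest",
     "name": {"value": "mem/available"}},
]
print(metrics_by_namespace(sample))
```

If the metric you are charting appears under a different namespace than the one selected in Metrics Explorer, that mismatch is your empty chart.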

Metric Aggregation Type Conflicts

Each Azure metric supports specific aggregation types: Average, Minimum, Maximum, Total (Sum), and Count. If you configure an alert rule or dashboard tile to use an aggregation type that the metric does not support, the result will be empty. For example, querying the "Total" aggregation on a metric that only supports "Average" will return no data. Check the supported metrics reference to confirm which aggregation types are valid for your metric.
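Metric definitions expose their valid aggregations, so this check is scriptable. A minimal sketch against a parsed definition (the `supportedAggregationTypes` field name is assumed from the metric-definitions REST response):

```python
def aggregation_supported(definition: dict, requested: str) -> bool:
    """Check a requested aggregation against a metric definition."""
    supported = {a.lower()
                 for a in definition.get("supportedAggregationTypes", [])}
    return requested.lower() in supported

# Illustrative definition that does not support Total
cpu = {"supportedAggregationTypes": ["Average", "Minimum", "Maximum"]}
print(aggregation_supported(cpu, "Total"))    # an empty chart waiting to happen
print(aggregation_supported(cpu, "Average"))
```

Running this against every metric your dashboards reference catches aggregation mismatches before they ship.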

Custom Metrics Not Surfacing from Application Code

If your application emits custom metrics via the Azure Monitor SDK or REST API, several conditions must be met for those metrics to appear:

  • The custom metric must be emitted to the correct regional endpoint matching the resource’s location
  • The metric namespace, name, and dimensions must be consistent across all emissions
  • The Application Insights resource or Azure Monitor workspace must be configured to accept custom metrics
  • The identity or API key used to emit metrics must have the Monitoring Metrics Publisher role on the target resource

A common mistake is emitting custom metrics to an Application Insights resource in a different region than the application, which causes the data to be silently dropped.
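The conditions above are easier to verify with the payload in front of you. A minimal Python sketch of the documented custom-metrics REST body (`time` plus `data.baseData`) and the regional ingestion endpoint; the metric name, namespace, and dimension here are illustrative:

```python
import json
from datetime import datetime, timezone

def custom_metric_payload(namespace: str, name: str,
                          dims: dict, value: float) -> dict:
    """Build a body in the documented custom-metrics REST shape."""
    return {
        "time": datetime.now(timezone.utc).isoformat(),
        "data": {"baseData": {
            "metric": name,
            "namespace": namespace,
            "dimNames": list(dims),
            "series": [{"dimValues": list(dims.values()),
                        "min": value, "max": value,
                        "sum": value, "count": 1}],
        }},
    }

def ingestion_url(region: str, resource_id: str) -> str:
    # The endpoint is regional and must match the target resource's region,
    # otherwise the data is silently dropped.
    return f"https://{region}.monitoring.azure.com{resource_id}/metrics"

body = custom_metric_payload("QueueProcessing", "QueueDepth",
                             {"Queue": "orders"}, 42)
print(json.dumps(body, indent=2))
```

Keeping the namespace, metric name, and dimension names identical across every emission is what lets the series line up in Metrics Explorer.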

Alerting Best Practices to Avoid Silent Failures

Missing metrics are particularly dangerous when they cause alerts to fail silently. An alert rule that depends on a metric that is not being collected will never fire, giving the false impression that everything is healthy.

Configure “No Data” Alert Behavior

When creating metric alert rules, configure the behavior for periods when no data is received. Azure Monitor allows you to choose between treating no data as a violation (fires the alert) or treating it as healthy (suppresses the alert). For critical infrastructure metrics, always set “no data” behavior to fire the alert. This ensures that if a VM stops emitting metrics because the agent crashed or the diagnostic settings were deleted, you are notified immediately rather than discovering the gap during an incident.
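The difference between the two behaviors is easiest to see in code. A minimal sketch of one alert-window evaluation (illustrative logic, not Azure's internal implementation):

```python
def evaluate_window(samples: list[float], threshold: float,
                    no_data_fires: bool) -> str:
    """Evaluate one alert window. With no_data_fires=True an empty
    window raises the alert instead of silently passing as healthy."""
    if not samples:
        return "fired" if no_data_fires else "ok"
    return "fired" if max(samples) > threshold else "ok"
```

With `no_data_fires=False`, a crashed agent produces the same "ok" result as a genuinely healthy VM, which is exactly the silent failure mode described above.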

Use Action Group Testing

After creating or modifying an action group, test it by manually triggering a test notification. Navigate to Monitor > Alerts > Action groups, select the group, and click Test. This confirms that email, SMS, webhook, or Logic App integrations are functioning correctly. We have seen organizations where alerts were correctly configured but the action group pointed to an expired email distribution list or a decommissioned webhook endpoint.

Layer Alert Rules for Defense in Depth

Do not rely on a single alert rule for critical resources. Create layered alerts that monitor the same resource from different angles:

  • Platform metric alert: CPU above 90% for 5 minutes
  • Log-based alert: Heartbeat signal missing for more than 5 minutes (detects agent or VM failure)
  • Activity log alert: Diagnostic settings deleted or modified

This layered approach ensures that even if one monitoring path fails, the others provide coverage. Organizations building out their monitoring strategy can benefit from working with a managed cloud partner to design alert architectures that balance coverage with alert fatigue.
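The heartbeat layer in particular is simple logic: flag any machine whose last heartbeat is older than the allowed gap. A minimal Python sketch of that condition (the KQL Heartbeat alert expresses the same thing server-side):

```python
from datetime import datetime, timedelta, timezone

def silent_computers(last_heartbeat: dict, now: datetime,
                     max_gap: timedelta = timedelta(minutes=5)) -> list[str]:
    """Flag machines whose most recent heartbeat is older than max_gap."""
    return sorted(name for name, ts in last_heartbeat.items()
                  if now - ts > max_gap)

# Illustrative data: web02 has been silent for 12 minutes
now = datetime(2026, 1, 1, 12, 0, tzinfo=timezone.utc)
beats = {"web01": now - timedelta(minutes=1),
         "web02": now - timedelta(minutes=12)}
print(silent_computers(beats, now))
```

Because this layer looks at the absence of data rather than a threshold on it, it catches agent and VM failures that a platform metric alert cannot.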

Common Pitfalls That Cause Metric Gaps

Even experienced Azure administrators run into these recurring issues. Knowing them in advance saves hours of troubleshooting.

Mismatched Data Collection Rules

With the transition from the legacy Log Analytics agent to the Azure Monitor Agent (AMA), many organizations end up with incomplete or misconfigured Data Collection Rules (DCRs). A DCR defines which performance counters, event logs, and custom metrics the agent should collect and where it should send them. If a DCR is missing a specific counter, such as memory utilization or disk queue length, that metric will not appear in your workspace even though the agent is healthy and running.

Review your DCRs in the Azure portal under Monitor > Data Collection Rules. Confirm that each rule covers the performance counters your dashboards and alerts depend on. We frequently see organizations that migrated from the legacy agent but only partially recreated their counter configurations in the new DCR format.

Workspace Quota and Retention Limits

Every Log Analytics workspace can enforce an optional daily ingestion cap. If your environment generates more data than a configured cap allows, Azure will stop ingesting new data for the remainder of the day. When this happens, metrics that flow through the workspace will appear to vanish without any obvious error message.

Check your workspace’s daily cap setting under Log Analytics workspace > Usage and estimated costs > Daily cap. For production environments, we recommend either removing the cap entirely or setting it high enough that normal operations never trigger it. If cost control is a concern, a better approach is to filter out low-value data at the DCR level rather than relying on a blunt ingestion cap.
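A simple linear projection from today's ingestion so far tells you whether you are on track to hit the cap before midnight. A minimal sketch (illustrative arithmetic only; real ingestion is rarely perfectly linear):

```python
def projected_daily_gb(ingested_gb_so_far: float,
                       hours_elapsed: float) -> float:
    """Linearly project today's total ingestion from what has arrived
    so far; compare the result against the workspace daily cap."""
    if hours_elapsed <= 0:
        raise ValueError("hours_elapsed must be positive")
    return ingested_gb_so_far / hours_elapsed * 24

# 5 GB in the first 6 hours projects to 20 GB for the day:
print(projected_daily_gb(5, 6))
```

If the projection regularly lands near your cap, raise the cap or filter low-value data at the DCR level before the next silent cutoff.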

Resource Provider Registration

Some Azure Monitor features require the Microsoft.Insights resource provider to be registered on the subscription. If it is not registered, diagnostic settings may fail silently, and platform metrics may not be emitted. Navigate to Subscriptions > Resource providers and confirm that Microsoft.Insights shows a status of “Registered.” This is an easy check that is often overlooked during initial cloud engineering setup.

Cross-Region and Cross-Subscription Visibility

If you manage resources across multiple Azure subscriptions or regions, confirm that your monitoring queries are scoped correctly. A Metrics Explorer chart that is filtered to a single subscription will not display metrics from resources in another subscription, even if those resources send data to the same Log Analytics workspace. Similarly, Azure Monitor workbooks that reference a specific workspace will only surface data that has been routed to that workspace.

For organizations with complex multi-subscription architectures, we recommend using Azure Lighthouse or Azure Monitor workspace-based architecture to consolidate visibility without duplicating data.

Monitoring Best Practices

Fixing missing metrics is one thing. Preventing them from going missing in the first place is a better investment of time and budget.

Standardize diagnostic settings with Azure Policy. Create a policy definition that automatically applies diagnostic settings to every new resource of a given type. This ensures that when a team member deploys a new App Service or Storage Account, metrics collection is configured from day one without manual intervention.

Build a monitoring health dashboard. Use an Azure Workbook or Grafana dashboard that tracks the health of your monitoring infrastructure itself. Include panels that show agent heartbeat status, DCR assignment coverage, workspace ingestion volume, and diagnostic settings completeness. If a gap appears, you will see it on the dashboard before it affects an alert or SLA report.

Test alerts regularly. An alert that has never fired may have a broken action group, an incorrect threshold, or a metric path that no longer exists. Schedule quarterly alert validation exercises where you deliberately trigger conditions to confirm the full pipeline works from metric collection through to notification delivery.

Document your monitoring architecture. Maintain a living document that maps each critical resource to its diagnostic settings destination, the DCRs that govern its agent, and the alerts that depend on its metrics. This documentation is invaluable during incident response and makes onboarding new team members significantly faster. A managed IT services partner can help establish and maintain this documentation as part of an ongoing operational engagement.

When to Engage Support or a Partner

If metrics are not appearing and you have validated the above, the issue may be related to a backend ingestion delay or an internal configuration conflict. Microsoft support can validate telemetry pipeline status and quota issues.

Alternatively, a partner like Exodata can help design and maintain a proactive monitoring foundation that ensures data is always available when your team needs it. We help organizations build security and compliance into their monitoring strategy from the start, ensuring that observability data is both complete and properly governed.

Frequently Asked Questions

Why are my Azure Monitor metrics missing?

Missing metrics most commonly result from unsupported resource types or SKUs, misconfigured diagnostic settings, or a missing monitoring agent such as the Azure Monitor Agent (AMA). Start by verifying that your resource emits the metric you expect using az monitor metrics list-definitions, then check that diagnostic settings and agent installations are in place.

How do I enable diagnostics in Azure?

Navigate to your resource in the Azure portal, then go to Monitoring > Diagnostic settings. From there, create a diagnostic settings profile, select the metrics and logs you want to collect, and choose a destination such as a Log Analytics workspace, Event Hub, or storage account. You can also automate this with Azure Policy to ensure every new resource gets diagnostic settings from day one.

What permissions do I need to view Azure Monitor metrics?

At minimum, you need the Reader or Monitoring Reader role on the resource. If you are querying data in a Log Analytics workspace, you also need appropriate workspace-level permissions (Log Analytics Reader). For emitting custom metrics from application code, the identity needs the Monitoring Metrics Publisher role. RBAC misconfigurations are a frequently overlooked cause of metrics appearing to be absent.

How long does it take for Azure Monitor metrics to appear?

Platform metrics for most Azure resources are available within one to two minutes. However, guest-level metrics that depend on the Azure Monitor Agent may take longer, especially if the agent was recently installed or the data collection rule is still propagating. Ingestion delays on the backend, while uncommon, can also cause temporary gaps.

How do I prevent Azure Monitor metrics from going missing in the future?

Use Azure Policy to automatically apply diagnostic settings to new resources. Deploy standardized Data Collection Rules across your agent fleet. Build a monitoring health dashboard that tracks agent heartbeat, workspace ingestion, and DCR coverage. Test your alerts quarterly to confirm the full notification pipeline is functioning. Configure “no data” behavior on critical alert rules so you are notified when metrics stop flowing.

What is the difference between platform metrics and guest metrics in Azure?

Platform metrics are collected automatically by Azure from the host infrastructure and do not require any agent. Examples include VM CPU percentage, disk IOPS, and network bytes. Guest metrics require the Azure Monitor Agent to be installed inside the VM and a Data Collection Rule to define which performance counters to collect. Examples include memory utilization, disk queue length, and process-level CPU. The two types of metrics appear under different namespaces in Metrics Explorer, which is a common source of confusion when charts appear empty.

Final Thoughts

Metrics not showing up in Azure Monitor is a common issue, but it usually traces back to a handful of predictable causes. With the right diagnostic mindset and familiarity with the platform’s requirements, these gaps can be resolved quickly. The real win comes from building a monitoring strategy that prevents gaps from occurring in the first place through standardized configurations, proactive health checks, and thorough documentation.

Looking to standardize your Azure monitoring strategy? Our team can help you build observability into every layer of your managed cloud stack, from VM agents and workspace configuration to custom dashboards and alert rules.


Stop chasing missing metrics and start monitoring with confidence. Contact us to schedule a consultation and build a reliable Azure monitoring environment your team can depend on.