Azure Monitor is a comprehensive solution for collecting, analyzing, and acting on telemetry data from both your cloud and on-premises environments.
Azure Monitor collects data from six sources:
Azure Tenant
This data includes tenant-wide services, such as Azure AD.
Azure Subscription
This includes Activity Log data and Service Health data for the resources in your subscription.
Azure Resources
This includes Resource Logs and/or Metrics for the individual Azure resources themselves. This is available for most Azure services, though the options vary by resource type. For example, some resources let you monitor both Resource Logs and Metrics, whereas others may only support Resource Logs.
Operating System / Guest-level
This data includes information taken from within the Operating System of a server. Some examples are performance counters, syslog data, event log data, crash dumps, and more.
You must install agents/extensions on your servers in order to collect this data. This can get quite confusing, as the current state of monitoring agents in Azure is quite a mess. For example, depending on your specific needs, you may need to install up to 4 different agents on a Linux server, each collecting a different set of data and sending it to a different destination.
Microsoft is attempting to simplify and solve this problem by introducing a new agent named the Azure Monitor Agent (AMA), which is intended to replace all of the previously mentioned agents. The idea is that the AMA would be the only agent you need to install. However, Microsoft is still actively developing this agent, and as of now it does not have all of the features found in the previous agents.
Applications
Detailed application monitoring in Azure Monitor is done with Application Insights, which collects data from applications running on a variety of platforms. The applications you monitor can be running anywhere: in Azure, another cloud, or on-premises.
Custom Sources
You can also send your own custom data to Azure Monitor by way of the Azure Monitor API.
Data stored in Azure Monitor
Logs
Logs can store a variety of data types, each with its own structure.
On the back end, Log data is stored in Tables in one or more Log Analytics Workspaces (LAW). By default, these Tables use the 'Analytics' plan, but certain Tables can be switched to the 'Basic' plan instead. The plan you select dictates quite a few settings, which are compared below. Note: not all Tables in a LAW support being switched to the 'Basic' plan.
Analytics plan:
- Standard Data Ingestion Costs
- 'Interactive' Data Retention of 31 days (or 90 days for some services), configurable up to a max of 2 years
- Full KQL Query support
- No costs to run KQL Queries
- Log Alerts supported

Basic plan:
- Much smaller Data Ingestion Costs
- 'Interactive' Data Retention fixed at 8 days
- Limited KQL Query support
- Costs to run KQL Queries
- Log Alerts not supported
One other type of Log that can be stored in a LAW is 'Archive' Logs. Archive Logs are configured per-Table in the LAW. You create Archive Logs by configuring a 'Total' retention period that is greater than your 'Interactive' retention period. For example, if you configured a Table with 30 days of Interactive retention and 90 days of Total retention, you would have 60 days of Archive Logs. In other words, Total retention minus Interactive retention gives you the amount of Archive Logs that will be stored. The Total retention period can be configured for a max of 7 years!
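The retention arithmetic above can be sketched as a tiny helper. This function is purely illustrative (it is not part of any Azure SDK):

```python
def archive_days(total_retention_days: int, interactive_retention_days: int) -> int:
    """Archive window = Total retention minus Interactive retention."""
    if total_retention_days < interactive_retention_days:
        raise ValueError("Total retention must be >= Interactive retention")
    return total_retention_days - interactive_retention_days

# The example from above: 90 days Total, 30 days Interactive
print(archive_days(90, 30))  # 60 days of Archive Logs
```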
You must understand that Archive Logs do NOT directly support KQL Queries at all! To get around this, you have two options.
Option 1 is to perform a Search against your Archive Logs, which copies the selected data into a new Table. Your search against the Archive Logs can cover a max of 1 year's worth of data. The new Table will have a name with a suffix of '_SRCH', it will use the 'Analytics' plan, and it will be of Type 'Search results'. Once the data has finished copying to this Table, you can run full KQL Queries against it. You pay twice with this option: once for the amount of data searched in the Archive Logs, and again for the ingestion of the results into the new Table.
Option 2 is to perform a Restore against your Archive Logs, which will also copy the data into a new Table. You can restore up to 60 TB worth of data this way. The new Table will have a name with a suffix of '_RST', it will use the 'Analytics' plan, and it will be of Type 'Restored logs'. There is NO Interactive retention period configured on this new Table: it will stay in the LAW until either you delete the Table or the underlying Archive Log data passes its Total retention period (which, remember, could be years).
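To keep the two options straight, here is a small Python sketch encoding the limits and naming conventions described above. The function names and constants are mine for illustration; this is not an Azure API:

```python
MAX_SEARCH_WINDOW_DAYS = 365  # a Search can cover at most 1 year's worth of data
MAX_RESTORE_TB = 60           # a Restore can bring back at most 60 TB

def search_result_table(base_table: str, window_days: int) -> str:
    """Option 1 (Search): result Table gets a '_SRCH' suffix."""
    if window_days > MAX_SEARCH_WINDOW_DAYS:
        raise ValueError("A Search can cover at most 1 year's worth of data")
    return f"{base_table}_SRCH"

def restore_result_table(base_table: str, data_tb: float) -> str:
    """Option 2 (Restore): result Table gets a '_RST' suffix."""
    if data_tb > MAX_RESTORE_TB:
        raise ValueError("A Restore can bring back at most 60 TB")
    return f"{base_table}_RST"

print(search_result_table("AppTraces", 120))   # AppTraces_SRCH
print(restore_result_table("AppTraces", 2.5))  # AppTraces_RST
```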
Finally, since data is being stored in a LAW, there are a few decisions you must make, as a LAW defines the geographic location of the data, the permissions for who can access the data, and the default data retention and cost settings. Maybe a single LAW is sufficient for all of your log data, or maybe you need two or more to meet your particular requirements. See Microsoft's guide, Design your Azure Monitor Logs deployment, for more information.
Metrics
Metrics are simple numeric data, and they are more lightweight than Logs.
Metrics are stored in a fully managed, time-series database. In other words, you don't have to worry about the back end storage with Metrics as Microsoft manages that for you.
Metrics are capable of supporting near-real-time scenarios, making them particularly useful for alerting and fast detection of issues.
What can Azure Monitor do with this data?
Analyze / Visualize
Use Metrics Explorer to analyze your Metrics data on a chart and compare metrics from different resources.
Use Log Analytics to write log queries and interactively analyze log data by using a powerful analysis engine. This uses the Kusto Query Language (KQL).
Pin your Log Analytics query results or Metrics Explorer charts to an Azure Dashboard.
Create an Azure Monitor Workbook and combine multiple sets of data into an interactive report.
Export query results to Power BI, allowing you to use powerful visualizations and to share with users outside of Azure.
Respond / Automate
You can create a Log Alert Rule or a Metric Alert Rule that can send a notification or take an automated action based on data thresholds that you set. The automated actions can be very powerful and include things like Azure Functions and Azure Logic Apps.
Metrics can be used to autoscale Azure resources automatically. For example, you might configure a Virtual Machine Scale Set to scale out when the CPU Metric is higher than 75%.
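A metric-based autoscale rule boils down to comparing an aggregated metric against a threshold. Here is a minimal sketch of that evaluation, purely for illustration (the real rule is configured on the Scale Set, not written in code):

```python
def should_scale_out(cpu_samples: list[float], threshold: float = 75.0) -> bool:
    """Scale out when average CPU over the evaluation window exceeds the threshold."""
    average = sum(cpu_samples) / len(cpu_samples)
    return average > threshold

print(should_scale_out([80.0, 90.0, 85.0]))  # True  -> add an instance
print(should_scale_out([40.0, 55.0, 60.0]))  # False -> hold steady
```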
I think I'll wrap it up there. I feel happy that I've covered most of the high-level points of the Azure Monitor service.
I may do a deep-dive article into the various server Agents and Extensions. But, as I said above, it's quite a mess right now and would probably be a very dense and complicated article.