
Application Insights and Power Automate

One of Power Automate’s weaknesses is its monitoring. Or rather, the lack of it. We often want to know, for example, the following things:

  • Are there flows in some environments that are executed a lot (generating a lot of Power Platform requests)?
  • Are there any flows with abnormal execution rates (e.g. flow stuck in a loop)?
  • Have flows in production environments ended up in error? Which ones?
  • Has flow x been run even once today?
  • etc

The basics. But mining this information, let alone reacting to it automatically, hasn’t been simple.

Information is available. In the Power Platform admin center, you can see, for example, information about the execution of flows.

For each flow, in addition to the run history, we see analytics on usage and errors.

The owner of a flow receives emails about various events. At worst, of course, a week late.

Because of this, critical flows often get a separate error-handling section that reports failures to the agreed channels.

This only solves one small part of the challenge. Besides, over the years hundreds of flows may have been built without a uniform notification mechanism. Adding one to every single flow is a lovely task.

Oh, if only we could get all flow events into a proper monitoring platform (Application Insights). Then we could build everything we need for monitoring ourselves.

We can stop dreaming because this is now possible.

Let’s see how to do it.

Connecting Power Automate to Application Insights

First, we need the Application Insights service in Azure. Let’s create one.

After that, in the Power Platform admin center, you can define Power Automate events to be exported into Application Insights.

In practice, select Data export (2) under Analytics (1). Let’s go to the App Insights tab (3) and create a new data export (4).

Name the package and select Power Automate as the content. The package may contain information about

  • Cloud flow runs
  • Cloud flow triggers
  • Cloud flow actions

Next, we select the environment whose events we are interested in. One package contains the events of one environment. The environment must be a Managed Environment, or the events cannot be exported at all.

Finally, we pick the Application Insights instance we want to use.

We are ready. After a short delay, the events will start appearing in Application Insights.

Application Insights

Application Insights is Azure’s service for web application monitoring, troubleshooting and usage tracking. If it is a new acquaintance, you can first read my previous post about how to utilize it with Power Apps.

The Metrics tool is an easy way to get started. Flow runs can be found under Server requests, and information about triggers and actions under Dependency calls.

However, we are interested in making our own queries against the log. In queries, flow runs can be found in the requests table, and information about triggers and actions in the dependencies table.

Quite a lot of data is saved from the events.
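As a quick sanity check, we can peek at the latest exported run events and project the most interesting columns. A minimal sketch, assuming the export schema described above (runs in the requests table, with the flow name inside the customDimensions.Data payload):

```kusto
// Peek at the ten most recent cloud flow run events.
requests
| where customDimensions['resourceProvider'] == 'Cloud Flow'
| where customDimensions['signalCategory'] == 'Cloud flow runs'
| extend Data = todynamic(tostring(customDimensions.Data))
| project timestamp, FlowName = tostring(Data['FlowDisplayName']), success, duration
| order by timestamp desc
| take 10
```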

Let’s make a few example queries.

Failed runs per flow

We look for failed runs, presented separately for each flow.

requests
| where customDimensions['resourceProvider'] == 'Cloud Flow'
| where customDimensions['signalCategory'] == 'Cloud flow runs'
| where success == false
| extend Data = todynamic(tostring(customDimensions.Data))
| extend FlowName = tostring(Data['FlowDisplayName'])
| summarize FailedCount = sum(itemCount) by FlowName

We immediately see that one flow has failed several times.

All runs

All runs are retrieved and the daily numbers are presented over time.

requests
| where customDimensions['resourceProvider'] == 'Cloud Flow'
| where customDimensions['signalCategory'] == 'Cloud flow runs'
| summarize RequestCount = sum(itemCount) by bin(timestamp, 1d)
| render timechart

This way we can monitor whether there are any deviations in the total number of runs.

Power Platform requests

If we are interested in the Power Platform requests generated by the environment’s flows, we can estimate their number by adding up the executed actions and executed triggers.

The result is not the exact truth, but an estimate is better than nothing.

dependencies
| where customDimensions['resourceProvider'] == 'Cloud Flow'
| where (customDimensions['signalCategory'] == 'Cloud flow triggers' and success == true) or customDimensions['signalCategory'] == 'Cloud flow actions'
| summarize TotalCount = sum(itemCount) by bin(timestamp, 1d)
| render timechart

Something happened on the 27th that might be worth investigating.
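To investigate such a spike, we can drill into the suspicious day and break the operation counts down per flow. A sketch, assuming the dependency events carry the same FlowDisplayName field in their Data payload; the date is an example value:

```kusto
// Drill into one suspicious day: which flows generated the operations?
// spikeDay is an example; set it to the day of the deviation.
let spikeDay = datetime(2023-06-27);
dependencies
| where customDimensions['resourceProvider'] == 'Cloud Flow'
| where timestamp between (spikeDay .. 1d)
| extend Data = todynamic(tostring(customDimensions.Data))
| summarize OperationCount = sum(itemCount) by FlowName = tostring(Data['FlowDisplayName'])
| order by OperationCount desc
```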

Has flow x run today?

We have a flow that runs every morning. The execution of that flow is critical for the rest of the day’s operations.

Tracking the errors of that flow is not enough. The flow may not have run at all: the trigger’s connection may have expired, the trigger may not have fired, the flow may have been suspended, and so on.

All of these should be caught. We write a query that counts the number of successful runs of that flow during the last 24 hours.

requests
| where customDimensions['resourceProvider'] == 'Cloud Flow'
| where customDimensions['signalCategory'] == 'Cloud flow runs'
| where customDimensions['resourceId'] == 'd0483697-7254-0e21-a335-f95c36c69427'
| where success == true
| where timestamp > ago(1d)
| summarize RunCount = sum(itemCount)

The number should always be one. If not, there is a reason to react.

The example queries are very simple. The query language KQL (Kusto Query Language) used in Application Insights enables the creation of truly versatile queries.
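For example, the “flow stuck in a loop” scenario from the beginning of the post can be approached by comparing each flow’s run count today against its own recent daily average. A minimal sketch; the 14-day window and the 5x threshold are arbitrary example choices, not recommendations:

```kusto
// Flag flows whose run count today far exceeds their own 14-day daily average.
let lookback = 14d;
let threshold = 5.0;  // "abnormal" = more than 5x the usual daily volume
let daily =
    requests
    | where customDimensions['resourceProvider'] == 'Cloud Flow'
    | where customDimensions['signalCategory'] == 'Cloud flow runs'
    | where timestamp > ago(lookback)
    | extend Data = todynamic(tostring(customDimensions.Data))
    | summarize RunCount = sum(itemCount) by FlowName = tostring(Data['FlowDisplayName']), Day = bin(timestamp, 1d);
daily
| summarize AvgPerDay = avg(RunCount) by FlowName
| join kind=inner (daily | where Day == bin(now(), 1d)) on FlowName
| where RunCount > threshold * AvgPerDay
| project FlowName, AvgPerDay, Today = RunCount
```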


Queries can be used with Azure dashboards. The easiest way to do this is to pin the query directly to the dashboard.

The first version of the dashboard describing the state of the environment’s flows looks like the following.


What we are really after, though, are automatic alerts. They can be created from the Monitoring -> Alerts section.

A new alert rule is created.

First, we define the signal that triggers the alert.

We can use ready-made signals such as

  • Failed requests (flow execution failure)
  • Server requests (flow executions)
  • Dependency call failures (failure of triggers or actions)
  • Dependency calls (execution of triggers and actions)

Let’s make an alert that monitors flow failures. If even one flow run has failed, an alert is fired. The situation is checked every hour (Check every), and the runs of the previous hour are always reviewed (Lookback period).

The cost of the alert rule is estimated at 10 cents per month. Not terribly expensive.

Next, we define what happens when the alert fires. This time, an email is sent.

Finally, give the rule a name and description, and select a subscription and resource group for it.

Query-based alerts

However, the real highlight is alerts based on your own queries.

We make an alert that fires if a specific flow has not completed successfully within the previous 24 hours.

The emails sent by Alerts all look like this.


Finally, Power Automate can be monitored as it should be. Without modifications made directly to the flows. Monitoring can also be quickly added to an existing solution. Best of all, it is now possible to monitor cases that were previously challenging (e.g. flow x has not been run at all).

Reports and alert logic are always built as needed. Usually a few generic alerts, plus more specific and more frequent alerts for critical flows.

The feature is still in preview.

