
Optimizing Power Platform requests

I have written about Power Platform requests in my blog: what they are and what kinds of limitations are associated with them. Now that we know they are not an inexhaustible resource, let's look at how to reduce their number.

Power Apps

When talking about Power Platform requests, Power Apps is quite safe, as long as you are careful with queries (LookUp, Filter, CountRows, etc.) inside repetitive elements (galleries and the ForAll function).

The Power Apps app in the image below has a gallery showing orders (Demo_Orders). For each order, the number of rows associated with it is displayed. The maker has taken a shortcut and decided to retrieve the rows associated with each order separately for each row in the gallery.

CountRows(Filter(Demo_OrderRows, OrderID = Text(ThisItem.ID)))

This kind of solution is poison to the performance of Power Apps because it generates a huge number of requests.

From the Power Apps Monitor, we can see what happens when the information needed by the gallery is loaded. 1380 events in the Network category!

But what would be the correct way to implement this? 

The number of rows should be stored in the order record, so there would be no need to make a heavy query inside the gallery. If the data source is Dataverse, a rollup column can be used for this purpose. In other cases, the number of rows associated with an order should be saved to the order whenever it is edited.

If there are (always) fewer than 2,000 order rows in the entire system, they can be retrieved into a local collection and the gallery can reference this collection. This way, determining the number of rows for a single order does not generate requests to the data source.
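
A minimal sketch of this approach: the rows are collected once, for example in App OnStart (colOrderRows is an illustrative name),

ClearCollect(colOrderRows, Demo_OrderRows);

and the gallery then counts against the local collection, which generates no further requests:

CountRows(Filter(colOrderRows, OrderID = Text(ThisItem.ID)))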

Power Automate

When using flows, it is much easier to inadvertently do something that generates an absurd number of requests.

Starting a flow (Trigger)

The very first thing to do is pay attention to how a flow starts. The more often it runs, the more requests are generated. If the flow starts on a schedule, there is a significant difference between it starting only on Fridays, once a day, once an hour, or once every half hour.

When creating a new scheduled flow, the default value for the timing is once per minute. What a terrible idea.

What would this kind of flow mean in terms of costs? It starts 60 * 24 = 1440 times a day. Each run consists of the trigger plus at least one action, i.e. at least two requests, so such a flow generates at least 2880 requests per day.

The Microsoft 365 license for a user includes 6000 requests per day.

Trigger Condition

If the flow starts from a change in the data source (a new record is created, someone edits an existing record, etc.), it is often a good idea to take advantage of Trigger Conditions. They are used to start the flow only when certain conditions are met, for example when the status of a row changes.

With the SharePoint connector, this setting can be found in the trigger settings.
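
For example (assuming a SharePoint choice column named Status, an illustrative name), a trigger condition that lets the flow start only for items in the "New" status could look like this:

@equals(triggerOutputs()?['body/Status/Value'], 'New')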

With Dataverse this can be found a bit more easily.
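
With the Dataverse trigger, a similar effect can also be achieved directly with the trigger's Filter rows setting, written as an OData expression. Borrowing the status column from the deviation example later in this post, it could look like this:

cr59f_deviationstatus eq 441700000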

Filtering (Filter Query)

The biggest request generators in Power Automate are loops. For example, let's get all order rows and examine the status of each row separately.

In many cases, a solution like the one below is utilized.

If we have 5,000 order rows, and only one of them is new, executing this flow will cause 10,003 requests.

When searching for rows, always retrieve only the rows of interest: in our example, rows with the status "New". This is accomplished by adding a filter in the Filter Query field.
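
For example, assuming the list has a status column named Status, the Filter Query of the Get items action could simply be:

Status eq 'New'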

Now the flow generates 4 requests instead of 10,003. And the end result is the same.

Unnecessary Compose actions

Sometimes the fields we want to use are not found in the dynamic content list. The reason for this is often that they are not the same data type as the target field.

An easy solution is to place the desired field inside a Compose action and use its output. No need to think about data types.

Sometimes we also want to manipulate the data before storing it. This easily leads to the following kind of solution:

  • Retrieve the creation date into a Compose action
  • Add x days to it (addDays)
  • Format the resulting date into the appropriate format (formatDateTime)
  • Use the resulting date in other actions

Both of these have the same problem: if we go through a large number of rows, the Compose actions generate a huge number of unnecessary requests inside the loop.

The formulas contained in the Compose actions can instead be written directly in the target field as a single expression.

In our example case, like this.

formatDateTime(addDays(items('Apply_to_each_item')?['Created'],20),'yyyy-MM-dd')

The item of the innermost loop can also be referenced more concisely with item().

formatDateTime(addDays(item()?['Created'], 20), 'yyyy-MM-dd')

Expand Query

Let’s imagine that we are storing Deviations in Dataverse. Once a day, we want to send an email notification to the person who created the deviation (Created By) about any deviations that are still in the “New” status.

We filter properly for the deviations (cr59f_deviationstatus eq 441700000 means Status = “New”), after which we send the email. But because the creator of the record is a lookup field, the deviation record only contains the creator’s identifier (GUID). This means that for each row, we would need to retrieve the user from the Users table to get their email address. Boring.

We may have to retrieve other information too in a similar fashion.

This is an unnecessary step, as we can extend the original query (Expand Query) so that the result set also contains fields from the related tables.

For example, the creator’s email address, which we were missing.

createdby($select=internalemailaddress)

This is how we get rid of one query within a loop.
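
As a sketch (the property names come from the Expand Query above), the expanded value can then be referenced directly inside the loop when composing the email:

item()?['createdby']?['internalemailaddress']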

“Everyone cleans up after themselves”

Let’s finish with something a little more creative.

Suppose we have a solution with 1,000 users. Each user creates 10 new records every day. A total of 10,000 rows are created daily. We want to archive records that are older than 30 days.

Easy. We schedule the following kind of flow to run once a day.

But… deleting 10,000 records takes 20,002 requests (the trigger, one query, and two actions per record). That’s a lot. In practice, you have to buy more request capacity, because hardly any user has that many requests at their disposal.

Let’s approach archiving from a completely different angle.

Let’s make app users clean up their own records. In practice, the requests needed for the cleanup are then distributed among the 1,000 users. The easiest way to do this is when launching the app (App OnStart).

So, let’s retrieve and delete records that were created by the user more than 30 days ago.

// Current user and the 30-day cut-off date
Set(varCurrentUser, User());
Set(varDate30DaysAgo, DateAdd(Now(), -30, Days));
// Collect the user's own records that are older than 30 days
ClearCollect(colMyOldItems,
    Filter('SharePoint list limits',
        'Created By'.Email = varCurrentUser.Email And Created < varDate30DaysAgo));
// Remove the collected records
Remove('SharePoint list limits', colMyOldItems);

If the user uses the application daily, at most 10 rows are deleted each time, which generates 11 requests (one query and ten deletes).

Of course, if the user has not used the application for a while, there might be 300 records waiting to be removed.

A slightly more labor-intensive but smarter solution is the following.

Let’s make a flow triggered from Power Apps that deletes records created by the user that are more than 30 days old.

The email address of the person who triggered the flow, found in the trigger outputs (triggerOutputs), is used in the Filter Query.

Created lt '@{addDays(utcNow(), -30)}' and Author/EMail eq '@{triggerOutputs()?['Headers']['x-ms-user-email']}'

Now, when the app starts, this flow is triggered. Deleting old records is not visible to the user in any way.
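
A minimal sketch of the App OnStart call, assuming the flow has been added to the app under the illustrative name CleanUpMyOldItems:

CleanUpMyOldItems.Run();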

The idea can be further refined. An 8-hour delay is added to the beginning of the flow. This way, deleting rows does not strain the data source during office hours, but after them. The exact start time depends on when the user started the application.

The cleanup flow we built starts every time the app is launched. How could we limit it to start at most once a day?

Let’s create a list of application users (User Profile). The list maintains user-specific information about when the flow was last executed (Old items deleted).

At the beginning of our flow, let’s write the new timestamp (utcNow()) to the user profile list.

Finally, let’s update the Power Apps app to read the user profile information at startup and start the flow only if it has not already been started today.
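
A rough sketch of that startup logic, assuming the User Profile list has a UserEmail column (an illustrative name) alongside the Old items deleted date column, and reusing the CleanUpMyOldItems flow from above:

// Run the cleanup flow only if it has not yet run today (or has never run)
With(
    {MyProfile: LookUp('User Profile', UserEmail = User().Email)},
    If(
        IsBlank(MyProfile.'Old items deleted') || MyProfile.'Old items deleted' < Today(),
        CleanUpMyOldItems.Run()
    )
)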

Of course, we should evaluate whether this method actually saves enough requests to be worth implementing.

This sums up the pain associated with the Power Platform request limits.

When does it make more sense to just buy more request capacity vs. optimizing the solution? Optimization has its downsides as well. At its worst, it leads to solutions that are more difficult to understand (= maintain).

On the other hand, managing extra capacity and licenses has its own costs.

The safest route is somewhere in the middle. Optimize away all of the nonsense, but in a way that keeps the solutions easy to understand. The rest can be covered by purchasing additional capacity.
