You know it’s going to happen eventually. You’ve got your important business process data nicely lined up in a Power Apps or Dynamics 365 based application, safely stored in the relational database of Microsoft Dataverse. Your automations and analytics systems couldn’t be happier. Then the human factor comes along and someone clicks the wrong button.
Whether you delete or overwrite data accidentally, the end result can be the same. It’s actually more likely that you’ll notice the deletion operation when records go fully missing. That’s exactly what happened to us a while ago in our own Business Forward app that runs our everyday operations. Not a huge disaster in terms of record deletion volume, but still something that we needed to fix ASAP.
Moments like this are an excellent opportunity to learn about the tools and processes related to data backup and restore in Power Platform. So, here’s a story of what we did when data was suddenly lost – for a moment.
Weighing our options: could we just roll this back?
The good news is, all the data in your Power Platform environment’s database is backed up automatically by Microsoft. System backups are retained for either 7 days (Power Apps and sandbox environments) or up to 28 days (Dynamics 365 production environments). Within this period, we can go back to any point in time at 30-minute intervals. In our case, we noticed the record deletion 2 hours after the event, so only a small hop back in time would be necessary.
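The point-in-time mechanics are easy to reason about with a small helper. This is purely an illustrative sketch of the 30-minute granularity and retention window described above — it is not any actual Power Platform API:

```python
from datetime import datetime, timedelta

def latest_restore_point(now: datetime, interval_minutes: int = 30) -> datetime:
    """Snap a timestamp down to the most recent 30-minute restore slot."""
    slot = timedelta(minutes=interval_minutes)
    midnight = datetime(now.year, now.month, now.day)
    full_slots = int((now - midnight) / slot)  # whole slots elapsed since midnight
    return midnight + full_slots * slot

def within_retention(point: datetime, now: datetime, retention_days: int = 7) -> bool:
    """Check that a candidate restore point still falls inside the backup
    retention window (7 or 28 days, depending on the environment type)."""
    return timedelta(0) <= now - point <= timedelta(days=retention_days)
```

For example, a deletion noticed at 14:46 could be rolled back to the 14:30 slot at the latest — or, as in our case, to a slot just before the deletion actually happened.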
“Let’s just roll the production environment back a couple of hours and everything will be OK again.” Well, if you could stand losing any data entered after the restore point, that would of course be an option. The fundamental problem is that a Dataverse backup always contains the entire database. There is no built-in way to restore records from just a specific table; it’s all or nothing.
If this environment was just a single CRM database like things used to be back in the days before Power Platform came along, then the direct restore operation on top of an existing database might be a feasible option. Not a good option, as there’s no going back after clicking the Restore button, but as always, you’ve got to balance the business criticality and the cost of time & effort involved.
Now, in our case the environment was our Power Platform production environment. This means it doesn’t just contain the Dataverse tables and the Model-driven Power Apps. There’s a wealth of internal tools built on Power Apps Canvas apps and Power Automate cloud flows that run in this system. What would happen to those elements if we’d restore the environment database?
Looking at Microsoft’s documentation on backup and restore, there’s a list of validation steps you need to take to ensure flows work as expected after an environment restore. Flows may be stopped, and triggers and actions may need to be adjusted. Connection references will require new connections. Custom connectors may need to be deleted and reinstalled. Oh, and speaking of Power Apps canvas apps, their app IDs will be different in the restored environment, breaking bookmarks, embeds, and a whole host of other things. As the icing on the cake, apps shared with everyone need to be re-shared.
Oh wow. This is much more complex than it was back in the XRM days. We’d easily spend the whole day fixing everything in our production environment, and we’re just a small team of 11 people with “a few” apps & flows. Restoring on top of the existing environment doesn’t sound like a reasonable option anymore.
Exploring tools outside of Power Platform
Time to think about our alternatives. Could we possibly have these missing records stored somewhere outside of Dataverse? Well, we do have the Azure Synapse Link standard integration configured, which pushes records from our production Dataverse key tables into an Azure Data Lake Storage Gen2 account. Could we reach out to the Lake and grab the historical records from there?
Unfortunately, the answer was: no. Now, it’s not that the Azure Synapse Link feature couldn’t technically be configured to preserve the historical data. It can be, but in our small PoC deployment we hadn’t yet changed any of the default settings. This meant that the missing records were also erased from the Data Lake, by design:
“All create, update, and delete operations are exported from Dataverse to the data lake. For example, when a user deletes an Account table row in Dataverse, the transaction is replicated in the destination location.”
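In other words, only a separately retained snapshot could bring deleted rows back from the lake. Had we kept periodic snapshot copies, recovering the deleted rows would be a simple set difference between an older and a newer export. A hypothetical sketch, assuming snapshot rows are loaded as dicts keyed by the Dataverse record Id:

```python
def rows_deleted_between(older: list, newer: list, key: str = "Id") -> list:
    """Return rows present in the older snapshot but absent from the newer
    one, i.e. records whose deletion was replicated to the lake in between."""
    newer_ids = {row[key] for row in newer}
    return [row for row in older if row[key] not in newer_ids]
```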
The journey continues then… What about the audit log in Dataverse? Could we pull the deleted records from the audit history? There’s no built-in feature from Microsoft that allows you to restore data based on the record change log, but as always, there are great community tools out there that address these gaps. The XrmToolBox Recycle Bin plugin gives you a convenient UI to browse deleted records and attempt to recreate them from the audit log data:
Unfortunately, here again the default settings became an issue. You see, we needed to restore data from a custom table where the auditing option had not been enabled, as in the WorkOrder custom table visualized above. Again, not at all an unusual situation, since as a best practice you don’t want to push absolutely everything in your Dataverse tables into the audit logs. That would both eat up the precious Dataverse log capacity ($10/GB/month) and cause potential performance issues.
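When auditing is enabled, the restore-from-audit idea boils down to replaying a record’s change history up to the delete event. Here’s a rough sketch of that replay logic with a deliberately simplified entry shape — this is not the actual Dataverse audit schema, nor literally what the XrmToolBox plugin does:

```python
def reconstruct_deleted_record(audit_entries: list):
    """Replay a record's audit trail (oldest entry first) to rebuild the
    attribute values it held just before deletion.
    Each entry is a simplified dict:
      {"action": "Create"|"Update"|"Delete", "new_values": {attr: value}}
    Returns None if the trail contains no Delete event."""
    state = {}
    deleted = False
    for entry in audit_entries:
        if entry["action"] == "Delete":
            deleted = True
        else:
            state.update(entry.get("new_values", {}))
    return state if deleted else None
```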
Restoring to a new environment
We decided that it would be perfectly fine for us to copy the missing data back to production from an Excel export, so we proceeded with restoring the production environment’s backup into a new sandbox environment. This meant that first we needed to check that we had enough storage capacity available for this operation, meaning the amount of Dataverse database, log, and file capacity consumed by our production environment. Yep, all good!
This is a good reminder about the need for having free Dataverse storage capacity available in your tenant. If you’ve got a huge production database that you need to restore a backup for, then you may be blocked from doing so until you’re able to either purchase more capacity or free it up by cleaning up other environments. This can be a nasty surprise if there’s a direct business impact from the unavailability of the backed-up data. At the very least, you should consider setting up the new pay-as-you-go, Azure-based licensing model as a Plan B.
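The capacity check itself is simple arithmetic per capacity type (database, file, log). A trivial helper, purely for illustration of the pre-restore check described above:

```python
def capacity_shortfall(required: dict, free: dict) -> dict:
    """Per capacity type, how many GB short of the restore requirement we
    are; an empty result means the restore can proceed."""
    return {k: round(required[k] - free.get(k, 0.0), 2)
            for k in required if required[k] > free.get(k, 0.0)}
```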
Another thing to consider is the time it will take for your restore process to complete. Even though our production database was only 6 GB in size, the restore process was stuck in the “run” phase for close to one hour. If you’ve got a production environment that runs in the hundreds of gigabytes, it might be a good idea to test how long creating a copy of it takes, so you can prepare your backup & restore plan of activities accordingly.
Once the new environment was created, it was in administration mode and available for system admins to log in to. We were able to quickly identify the missing records, and after 10 minutes of Power Query magic the data had been restored to the production environment tables.
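If you’d rather script the re-import than paste from Excel, the recovered rows could also be turned into Dataverse Web API upserts (a PATCH against the record’s primary key performs an upsert). A sketch with hypothetical table and column names — a real run would also need OAuth headers and ideally $batch requests:

```python
def build_upsert_requests(rows: list,
                          table_set_name: str = "cr123_workorders",
                          key: str = "cr123_workorderid") -> list:
    """Build Dataverse Web API PATCH upserts (method, url, body) for the
    recovered rows. The entity set and key column names here are made up
    for illustration; substitute your own table's logical names."""
    base = "/api/data/v9.2"
    requests_ = []
    for row in rows:
        body = {k: v for k, v in row.items() if k != key}  # key goes in the URL
        requests_.append({
            "method": "PATCH",
            "url": f"{base}/{table_set_name}({row[key]})",
            "body": body,
        })
    return requests_
```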
Preventing future data loss
It’s never a bad idea to test your backups and plan how to react when disaster strikes. In this particular case, we learned what overwriting an existing environment with a database backup could do to Power Apps and Power Automate components. We also identified new ways to ensure we have historical data stored in Azure Data Lake, as well as the good ol’ audit log data that could be accessed for restore purposes via community tools.
Knowing how to recover from errors is important, but planning how to proactively reduce the chances for errors is even more valuable. In this scenario where records were accidentally deleted, we should of course ask ourselves “should users even be able to delete records?” In many cases, the answer might be that a better practice would be to only allow record deactivation rather than hard deletes that physically remove the data. Or at least restrict the rights for deletion to the user’s own records.
Right, so all we need to do is adjust the security roles in our Dataverse environment a bit and the problem is solved… Well, if only it was that easy. We quickly discovered certain dependencies whereby it wasn’t all that easy to remove the Power Platform administrator role from all normal user accounts. Which in turn means that these users will have system administrator level rights to all Dataverse environments within the tenant – even the production environments that should have more safeguards around them than dev/test/demo environments.
We don’t manually deploy solutions into our production environment, rather we leverage Azure DevOps and the Power Platform Build Tools to automate this process. The pipelines are configured to use a service principal, so we don’t really need any other system administrator accounts for updating our production apps. All good on this side.
Unfortunately, there are other important environments in our tenant, besides just our FF Production. You see, back in Spring 2020 when we were a small team of 5 persons, someone went ahead and deployed the CoE Starter Kit using his personal credentials (yes, that “someone” was me). Now, since the sync flows in the CoE Starter Kit need full access to all the Power Platform resources in the tenant via the Power Apps APIs, they require admin-level privileges. Which is why demoting this user to a non-admin in our production environment wasn’t immediately possible.
It’s a never-ending journey when it comes to learning the admin and governance tasks related to Power Platform. This is why we try to eat as much of our own dogfood as possible (well, dogfood coming from the Microsoft factory) and discover the problems & solutions before our customers run into them. As the next step we will establish a proper service account to run all the background processes where privileged accounts are required, like the CoE Starter Kit data collection flows.
Just because you’ve deployed something as a trial/demo/PoC originally, doesn’t mean it couldn’t one day have dependencies to a production process with more critical business data. Many of the citizen developer driven initiatives may well start like this, with the creator’s user identity used for running a common process. I wouldn’t label it as a problem – rather it’s just a step to consider in your Power Platform governance model when those experimental processes need to graduate to a centrally managed solution for wider use across the organization.
Want to get started with Power Platform governance?
We have created the Power Platform Governance Starter Kit product to help you kickstart your low-code application platform journey with confidence. Tools, reports, analysis, and guidance from the Forward Forever team of experts.