Python + Dataverse Series – #07: Running a Linear Normalization Algorithm on Dataverse Data Using Python

This is a continuation of the Dataverse SDK for Python series. If you haven’t checked out the earlier articles, I would encourage you to start from the beginning of the series.

Machine learning often begins with one essential step: data preprocessing. Before models can learn patterns, the raw data must be cleaned, scaled, and transformed into a form suitable for analysis. In this post, I will demonstrate how to retrieve numerical data from Microsoft Dataverse and apply a linear normalization algorithm using Python.

Normalization is a fundamental algorithm in machine learning pipelines. It rescales numeric values into a consistent range—typically between 0 and 1—making them easier for algorithms to interpret and compare.

1. Retrieving Data from Dataverse

Using the DataverseClient and Interactive Browser authentication, we connect to Dataverse and fetch the revenue field from the Account table. This gives us a small dataset to run our algorithm on.

from azure.identity import InteractiveBrowserCredential
from PowerPlatform.Dataverse.client import DataverseClient

credential = InteractiveBrowserCredential()
client = DataverseClient("https://ecellorsdev.crm8.dynamics.com", credential)

account_batches = client.get(
    "account",
    select=["accountid", "revenue"],
    top=10,
)

We then extract the revenue values into a NumPy array.
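As a sketch of that extraction step (the sample batches below are hypothetical stand-ins for what `client.get` returns):

```python
import numpy as np

# Hypothetical paged batches, standing in for the output of client.get(...)
account_batches = [
    [{"accountid": "a1", "revenue": 120000.0}, {"accountid": "a2", "revenue": None}],
    [{"accountid": "a3", "revenue": 80000.0}],
]

# Collect the non-null revenue values across all batches
revenues = [
    account["revenue"]
    for batch in account_batches
    for account in batch
    if account.get("revenue") is not None
]
revenues = np.array(revenues)
print(revenues)
```

Skipping records with a null revenue keeps the NumPy array purely numeric, so the normalization step below can run without NaN handling.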

2. Implementing the Linear Normalization Algorithm

The algorithm used here is min–max normalization, defined mathematically as:

normalized = (x − min(x)) / (max(x) − min(x))

This algorithm ensures:

  • the smallest value becomes 0
  • the largest becomes 1
  • all other values fall proportionally in between

Here’s the implementation:

import numpy as np
revenues = np.array(revenues)
min_rev = np.min(revenues)
max_rev = np.max(revenues)
normalized_revenues = (revenues - min_rev) / (max_rev - min_rev)

This is a classic preprocessing algorithm used in machine learning pipelines before feeding data into models such as regression, clustering, or neural networks.
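One caveat worth noting: the snippet above divides by max − min, which breaks when every value is identical. A small sketch with a guard for that degenerate case (the helper name is mine, not from the post):

```python
import numpy as np

def min_max_normalize(values):
    """Scale values to [0, 1]; map everything to 0 when all values are equal."""
    arr = np.asarray(values, dtype=float)
    lo, hi = arr.min(), arr.max()
    if hi == lo:
        return np.zeros_like(arr)  # avoid division by zero
    return (arr - lo) / (hi - lo)

print(min_max_normalize([10, 20, 30]))   # smallest -> 0, largest -> 1
print(min_max_normalize([5, 5, 5]))      # degenerate case handled
```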

3. Visualizing the Normalized Output

To better understand the effect of the algorithm, we plot the normalized values:

import matplotlib.pyplot as plt
plt.plot(normalized_revenues, marker='o')
plt.title('Normalized Revenues from Dataverse Accounts')
plt.xlabel('Account Index')
plt.ylabel('Normalized Revenue')
plt.grid()
plt.show()

The visualization highlights how the algorithm compresses the original revenue values into a uniform scale.

4. Why Normalization Matters

Normalization is not just a mathematical trick—it’s a crucial algorithmic step that:

  • prevents large values from dominating smaller ones
  • improves convergence in optimization-based models
  • enhances the stability of distance‑based algorithms
  • makes datasets comparable across different ranges
# Run a linear normalization algorithm on data retrieved from Dataverse
from azure.identity import InteractiveBrowserCredential
from PowerPlatform.Dataverse.client import DataverseClient
import numpy as np

# Connect to Dataverse
credential = InteractiveBrowserCredential()
client = DataverseClient("https://ecellorsdev.crm8.dynamics.com", credential)

# Fetch account data as paged batches
account_batches = client.get(
    "account",
    select=["accountid", "revenue"],
    top=10,
)

revenues = []
for batch in account_batches:
    for account in batch:
        if "revenue" in account and account["revenue"] is not None:
            revenues.append(account["revenue"])
revenues = np.array(revenues)

# Apply a simple linear algorithm: normalize the revenues
if len(revenues) > 0:
    min_rev = np.min(revenues)
    max_rev = np.max(revenues)
    normalized_revenues = (revenues - min_rev) / (max_rev - min_rev)
    print("Normalized Revenues:", normalized_revenues)

# Visualize the result
import matplotlib.pyplot as plt

plt.plot(normalized_revenues, marker='o')
plt.title('Normalized Revenues from Dataverse Accounts')
plt.xlabel('Account Index')
plt.ylabel('Normalized Revenue')
plt.grid()
plt.show()

This code transforms raw Dataverse revenue data into normalized, machine‑learning‑ready values that can be analyzed, compared, and visualized effectively.

You can download the Python Notebook below if you want to work with VS Code

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-RetrieveData-ApplyLinearAlgorithm.ipynb

Once you have opened the Python notebook, run the cells in order. You will be prompted to authenticate in a separate browser tab; once authenticated, the cells execute and you should see the normalized revenues printed and plotted.

Hope you found this useful. More interesting topics are coming, so stay tuned for the upcoming articles.

Cheers,

PMDY

Python + Dataverse Series – #06: Data preprocessing steps before running Machine Learning Algorithms

Hi Folks,

If you are a Power Platform consultant who is new to working with Python, I would encourage you to start from the beginning of this series.

We have now reached an interesting part of this series, where machine learning algorithms are run against Dataverse data. In this post we will look at why feature scaling is a critical preprocessing step for many machine learning algorithms: it ensures that all features contribute equally to the model’s outcome, prevents numerical instability, and helps optimization algorithms converge faster to the optimal solution.

Before running any machine learning algorithm, we need to do some data preprocessing, such as scaling the data. In this case we will use min–max normalization (feature scaling to the [0, 1] range).
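To see why scaling matters, here is a tiny hypothetical illustration (the numbers are made up, not from Dataverse): with raw features, Euclidean distance is dominated by the large-range feature, while after min–max scaling both features contribute equally:

```python
import numpy as np

# Two hypothetical accounts described by [revenue, employees]
a = np.array([1_000_000.0, 50.0])
b = np.array([1_200_000.0, 500.0])

# Raw distance is driven almost entirely by revenue
print(np.linalg.norm(a - b))

def scale(col):
    # Min-max scale one feature column to [0, 1]
    return (col - col.min()) / (col.max() - col.min())

X = np.vstack([a, b])
X_scaled = np.apply_along_axis(scale, 0, X)

# After scaling, both features contribute equally to the distance
print(np.linalg.norm(X_scaled[0] - X_scaled[1]))
```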

# Preprocessing step before running machine learning algorithms
from azure.identity import InteractiveBrowserCredential  # using interactive login
from PowerPlatform.Dataverse.client import DataverseClient  # Dataverse SDK for Python
import numpy as np  # NumPy for the calculations

# Connect to Dataverse
credential = InteractiveBrowserCredential()
client = DataverseClient("https://ecellorsdev.crm8.dynamics.com", credential)  # creates the Dataverse client

# Fetch the top 10 accounts (accountid, revenue) as paged batches
account_batches = client.get(
    "account",
    select=["accountid", "revenue"],
    top=10,
)

revenues = []
for batch in account_batches:
    for account in batch:
        if "revenue" in account and account["revenue"] is not None:
            revenues.append(account["revenue"])
revenues = np.array(revenues)

# Normalize the revenue
if len(revenues) > 0:
    min_rev = np.min(revenues)
    max_rev = np.max(revenues)
    normalized_revenues = (revenues - min_rev) / (max_rev - min_rev)
    print("Normalized Revenues:", normalized_revenues)

# Visualize the result
import matplotlib.pyplot as plt

plt.plot(normalized_revenues, marker='o')
plt.title('Normalized Revenues from Dataverse Accounts')
plt.xlabel('Account Index')
plt.ylabel('Normalized Revenue')
plt.grid()
plt.show()

You can download the Python Notebook below if you want to work with VS Code

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-PreProcessingStepBeforeMachineLearning.ipynb

Hope you found this useful…

Cheers,

PMDY

Python + Dataverse Series – How to Run Python Code in VS Code

Hi Folks,

As you may know, Python is currently the #1 programming language, with a massive, versatile ecosystem of libraries for data science, AI, and backend web development. This post kicks off a hands‑on series about working with Microsoft Dataverse using Python. We’ll explore how to use the Dataverse SDK for Python to connect with Dataverse, automate data operations, and integrate Python solutions across the broader Power Platform ecosystem. Whether you’re building data-driven apps, automating workflows, or extending Power Platform capabilities with custom logic, this series will help you get started with practical, real‑world examples.

https://www.microsoft.com/en-us/power-platform/blog/2025/12/03/dataverse-sdk-python/

With the release of the Dataverse SDK for Python, building Python-based logic for the Power Platform has become dramatically simpler. In this post, we’ll walk through how to download Python and set it up in Visual Studio Code so you can start building applications that interact with Dataverse using Python. Sounds exciting already. Let’s dive in and get everything set up.

1. Download Python from the official website below and install it on your computer.

https://www.python.org/ftp/python/3.14.3/python-3.14.3-amd64.exe

2. Install VS Code

Important: During the Python installation, make sure to check “Add Python to PATH”. This ensures VS Code can detect Python automatically.

3. After installation, open VS Code and install the Python extension (Microsoft’s official one). This extension enables IntelliSense, debugging, and running Python scripts.

4. That’s it; you are now able to run Python code inside VS Code.

5. Create or open a Python file.

6. If you want to run Python programs in VS Code, follow the options below:

a. Select Start Debugging.

b. When prompted with the debug configuration window, select the first option; it automatically runs your Python code.

This is very easy to set up…

If you want to continue reading this series, check out the next article.

Hope this helps…

Cheers,

PMDY

Python + Dataverse Series – #05: Remove PII

Hi Folks,

This is a continuation of the Python + Dataverse series; it is worth starting from the beginning of the series here.

At times, there is a need to remove PII (Personally Identifiable Information) from Dataverse environments. For this one-time task, you can run the Python script below; let’s take the example of removing PII from Contact fields.

from azure.identity import InteractiveBrowserCredential
from PowerPlatform.Dataverse.client import DataverseClient

# Connect to Dataverse
credential = InteractiveBrowserCredential()
client = DataverseClient("https://ecellorsdev.crm8.dynamics.com", credential)

# Remove PII data from Dataverse records, in this case contact records
def remove_pii_from_contact(contact):
    pii_fields = ['emailaddress1', 'telephone1', 'mobilephone', 'address1_line1', 'address1_city', 'address1_postalcode']
    for field in pii_fields:
        if field in contact:
            contact[field] = '[REDACTED]'
    return contact

# Fetch contacts with PII (Dataverse client returns paged batches)
contact_batches = client.get(
    "contact",
    select=[
        "contactid",
        "fullname",
        "emailaddress1",
        "telephone1",
        "mobilephone",
        "address1_line1",
        "address1_city",
        "address1_postalcode",
    ],
    top=10,
)

# Remove PII and update contacts
for batch in contact_batches:
    for contact in batch:
        contact_id = contact.get("contactid")
        sanitized_contact = remove_pii_from_contact(contact)
        # Prepare update data (exclude contactid)
        update_data = {key: value for key, value in sanitized_contact.items() if key != "contactid"}
        # Update the contact in Dataverse
        client.update("contact", contact_id, update_data)
        print(f"Contact {contact_id} updated with sanitized data: {sanitized_contact}")
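A quick standalone check of the redaction helper from the listing above (the sample record is hypothetical):

```python
def remove_pii_from_contact(contact):
    # Same helper as in the listing: overwrite known PII fields in place
    pii_fields = ['emailaddress1', 'telephone1', 'mobilephone',
                  'address1_line1', 'address1_city', 'address1_postalcode']
    for field in pii_fields:
        if field in contact:
            contact[field] = '[REDACTED]'
    return contact

sample = {"contactid": "c1", "fullname": "Jane Doe", "emailaddress1": "jane@contoso.com"}
print(remove_pii_from_contact(sample))
```

Note that `fullname` is left untouched; if names also count as PII in your scenario, add the relevant columns to `pii_fields`.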

If you want to work on this, download the Python Notebook to use in VS Code…

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-DataverseSDK-RemovePII.ipynb

Cheers,

PMDY

Understanding MIME Types in Power Platform – Quick Review

Many people don’t know the significance of MIME types, so this post gives a brief overview of them before we move on to security concepts in Power Platform in my upcoming articles.

In the Microsoft Power Platform, MIME types (Multipurpose Internet Mail Extensions) are standardized labels used to identify the format of data files. They are critical for ensuring that applications like Power Apps, Power Automate, and Power Pages can correctly process, display, or transmit files. 

Core Functions in Power Platform

  • Dataverse Storage: Tables such as ActivityMimeAttachment and Annotation (Notes) use a dedicated MimeType column to store the format of attached files alongside their Base64-encoded content.
  • Security & Governance: Administrators can use the Power Platform Admin Center to block specific “dangerous” MIME types (e.g., executables) from being uploaded as attachments to protect the environment.
  • Power Automate Approvals: You can configure approval flows to fail if they contain blocked file types, providing an extra layer of security for email notifications.
  • Power Pages (Web Templates): When creating custom web templates, the MIME type field controls how the server responds to a browser. For example, templates generating JSON must be set to application/json to be parsed correctly.
  • Email Operations: When using connectors like Office 365 Outlook, you must specify the MIME type for attachments (e.g., application/pdf for PDFs) so the recipient’s client can open them properly. 

Common MIME Types Used

  • .pdf: application/pdf
  • .docx: application/vnd.openxmlformats-officedocument.wordprocessingml.document
  • .xlsx: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
  • .png / .jpg: image/png / image/jpeg
  • .json: application/json
  • Unknown: application/octet-stream (used for generic binary files)
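If you are scripting attachment handling in Python, the standard library's `mimetypes` module performs this extension-to-type lookup (a sketch, not part of the original post; results for some formats can vary by platform):

```python
import mimetypes

def mime_for(filename: str) -> str:
    # Fall back to the generic binary type when the extension is unknown
    mime, _encoding = mimetypes.guess_type(filename)
    return mime or "application/octet-stream"

for name in ["report.pdf", "data.json", "mystery.xyz123"]:
    print(name, "->", mime_for(name))
```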

Implementing MIME type handling and file restrictions ensures your Power Platform solutions are both functional and secure.

1. Programmatically Setting MIME Types in Power Automate 

When working with file content in Power Automate, you often need to define the MIME type within a JSON object so connectors (like Outlook or HTTP) understand how to process the data. 

  • Structure: Use a Compose action to build a file object with the $content-type (MIME type) and $content (Base64 data).
  • Dynamic Mapping: If you don’t know the file type in advance, you can use an expression to map extensions to MIME types or use connectors like Cloudmersive to automatically detect document type information. 
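As a hedged sketch of that file-object structure in Python (the `$content-type`/`$content` keys follow the convention described above; the helper name is mine):

```python
import base64
import json

def make_file_object(data: bytes, mime_type: str) -> dict:
    # Power Automate-style file object: MIME type plus Base64-encoded content
    return {
        "$content-type": mime_type,
        "$content": base64.b64encode(data).decode("ascii"),
    }

obj = make_file_object(b"hello", "application/pdf")
print(json.dumps(obj, indent=2))
```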

2. Restricting File Types in Power Apps

The Attachment control in Power Apps does not have a built-in “allowed types” property, so you must use Power Fx formulas to validate files after they are added. 

  • Validation on Add: Use the OnAddFile property of the attachment control to check the extension and notify the user if it’s invalid.
  • Submit Button Logic: For added security, set the DisplayMode of your Submit button to Disabled if any attachment in the list doesn’t match your criteria. 

3. Global Restrictions (Admin Center)

To enforce security across the entire environment, administrators can navigate to the Power Platform Admin Center to manage blocked MIME types. Adding an extension to the blocked file extensions list prevents users from uploading those file types to Dataverse tables like Notes or email attachments. 

Hope this helps…in next post, I will be talking about Content Security Policy and how Power Platform can be secured using different sets of configuration.

Cheers,

PMDY

Agentic AI Business Solutions Architect Exam – AB 100 Experience

Hi Folks,

On 14 November 2025, I took the AB 100 exam; this post shares my experience with it.

The exam didn’t feel difficult or tricky to me, but there is a lot to read in a short amount of time. Most of the questions revolved around Copilot Studio, Azure AI Foundry, Azure services for tracking telemetry, Copilot, and Dynamics 365 Customer Engagement, Finance and Operations, and Supply Chain Management.

There is plenty of nitty-gritty on prebuilt agents, custom agents built with Azure AI Foundry, agent governance, and choosing the right agent for the need, but note that no questions came up on AI Builder or licensing.

There were also scenario-based questions on strategy and choosing the right tool to build an agent, e.g. the Azure Cloud Adoption Framework and the Power Platform Well-Architected Framework.

As per the exam NDA, exact exam questions may not be shared publicly; I am sharing my overall experience so that anyone preparing for this exam can benefit from it.

If you want to learn further, go through the collection below, which was recently published by Microsoft:

AB 100 Collection

Hope it helps..

Cheers,

PMDY

Python + Dataverse Series – #04: Create records in batch using Execute Multiple

Hi Folks,

This is a continuation of the Python + Dataverse series. In this post, we will see how to create multiple records in a single batch using ExecuteMultiple in Python.

Please use the code below to create the records.

import msal
import requests
import json
import re
import time

# Azure AD details (use your own app registration; never commit real secrets)
client_id = 'XXXX'
client_secret = 'XXXX'
tenant_id = 'XXXX'
authority = f'https://login.microsoftonline.com/{tenant_id}'
resource = 'https://ecellorsdev.crm8.dynamics.com'

# Get token with error handling
try:
    print(f"Attempting to authenticate with tenant: {tenant_id}")
    print(f"Authority URL: {authority}")
    app = msal.ConfidentialClientApplication(client_id, authority=authority, client_credential=client_secret)
    print("Acquiring token…")
    token_response = app.acquire_token_for_client(scopes=[f'{resource}/.default'])
    if 'error' in token_response:
        print(f"Token acquisition failed: {token_response['error']}")
        print(f"Error description: {token_response.get('error_description', 'No description available')}")
    else:
        access_token = token_response['access_token']
        print("Token acquired successfully.")
        print(f"Token length: {len(access_token)} characters")
except ValueError as e:
    print(f"Configuration Error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")

# Create 100 contacts in Dataverse using the Web API
try:
    print("Making Web API requests to create contacts…")
    # Dataverse Web API endpoint for contacts
    web_api_url = f"{resource}/api/data/v9.2/contacts"
    # Base headers with authorization token
    headers = {
        'Authorization': f'Bearer {access_token}',
        'OData-MaxVersion': '4.0',
        'OData-Version': '4.0',
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    }
    # Simple approach: create multiple contacts sequentially
    # Generate 100 contacts with different last names
    contacts_to_create = [
        {"firstname": "Ecellors", "lastname": f"Test{str(i).zfill(3)}"}
        for i in range(1, 101)
    ]
    create_headers = headers.copy()
    create_headers['Prefer'] = 'return=representation'
    created_ids = []
    print("Creating contacts sequentially…")
    for i, body in enumerate(contacts_to_create, start=1):
        try:
            resp = requests.post(web_api_url, headers=create_headers, json=body, timeout=15)
        except requests.exceptions.RequestException as e:
            print(f"Request error creating contact #{i}: {e}")
            continue
        if resp.status_code in (200, 201):
            try:
                j = resp.json()
                cid = j.get('contactid')
            except ValueError:
                cid = None
            if cid:
                created_ids.append(cid)
                print(f"Created contact #{i} with id: {cid}")
            else:
                print(f"Created contact #{i} but response body missing id. Response headers: {resp.headers}")
        elif resp.status_code == 204:
            # Try to extract the id from the response headers
            entity_url = resp.headers.get('OData-EntityId') or resp.headers.get('Location')
            if entity_url:
                m = re.search(r"([0-9a-fA-F\-]{36})", entity_url)
                if m:
                    cid = m.group(1)
                    created_ids.append(cid)
                    print(f"Created contact #{i} (204) with id: {cid}")
                else:
                    print(f"Created contact #{i} (204) but couldn't parse id from headers: {resp.headers}")
            else:
                print(f"Created contact #{i} (204) but no entity header present: {resp.headers}")
        else:
            print(f"Failed to create contact #{i}. Status code: {resp.status_code}, Response: {resp.text}")
        # Small pause to reduce the chance of throttling/rate limits
        time.sleep(0.2)
    if created_ids:
        print("Created contact ids:")
        for cid in created_ids:
            print(cid)
except Exception as e:
    print(f"Unexpected error during contact creation: {e}")
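The listing above creates the contacts one HTTP request at a time. For a true single-request batch, the Dataverse Web API exposes an OData `$batch` endpoint that accepts a multipart body. Here is a hedged sketch of building such a payload (the boundary names and helper are illustrative; you would POST the result to `{resource}/api/data/v9.2/$batch` with `Content-Type: multipart/mixed;boundary=batch_boundary` and the usual authorization header):

```python
import json

def build_batch_payload(records, api_path="/api/data/v9.2/contacts"):
    """Build an OData $batch body with one changeset containing all the creates."""
    batch_b, cs_b = "batch_boundary", "changeset_boundary"
    parts = [
        f"--{batch_b}",
        f"Content-Type: multipart/mixed;boundary={cs_b}",
        "",
    ]
    for i, record in enumerate(records, start=1):
        parts += [
            f"--{cs_b}",
            "Content-Type: application/http",
            "Content-Transfer-Encoding: binary",
            f"Content-ID: {i}",
            "",
            f"POST {api_path} HTTP/1.1",
            "Content-Type: application/json",
            "",
            json.dumps(record),
        ]
    parts += [f"--{cs_b}--", f"--{batch_b}--", ""]
    return "\r\n".join(parts)

payload = build_batch_payload([{"firstname": "Ecellors", "lastname": "Test001"}])
print(payload)
```

Grouping the creates into one changeset also makes them transactional: if one create fails, Dataverse rolls back the whole changeset.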

Please download this Jupyter notebook to work on it easily using VS Code.

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-Dataverse-ExecuteMultipleUsingPython.ipynb

If you want to continue reading this series, follow along

Hope this helps..

Cheers,

PMDY

Python + Dataverse Series – Post #03: Create, Update, Delete records via Web API

Hi Folks,

This is a continuation of the Python + Dataverse series. In this post, we will perform full CRUD (Create, Retrieve, Update, Delete) operations in Dataverse using the Web API.

Please use the code below to make Web API calls to Dataverse.

import msal
import requests
import json
import re

# Azure AD details
client_id = 'XXXX'
client_secret = 'XXXX'
tenant_id = 'XXXX'
authority = f'https://login.microsoftonline.com/{tenant_id}'
resource = 'https://XXXX.crm8.dynamics.com'

# Get token with error handling
try:
    print(f"Attempting to authenticate with tenant: {tenant_id}")
    print(f"Authority URL: {authority}")
    app = msal.ConfidentialClientApplication(client_id, authority=authority, client_credential=client_secret)
    print("Acquiring token…")
    token_response = app.acquire_token_for_client(scopes=[f'{resource}/.default'])
    if 'error' in token_response:
        print(f"Token acquisition failed: {token_response['error']}")
        print(f"Error description: {token_response.get('error_description', 'No description available')}")
    else:
        access_token = token_response['access_token']
        print("Token acquired successfully.")
        print(f"Token length: {len(access_token)} characters")
except ValueError as e:
    print(f"Configuration Error: {e}")
    print("\nPossible solutions:")
    print("1. Verify your tenant ID is correct")
    print("2. Check if the tenant exists and is active")
    print("3. Ensure you're using the right Azure cloud (commercial, government, etc.)")
except Exception as e:
    print(f"Unexpected error: {e}")

try:
    # Full CRUD operations: Create, Read, Update, Delete a contact in Dataverse
    print("Making Web API requests to perform CRUD operations on contacts…")
    # Dataverse Web API endpoint for contacts
    web_api_url = f"{resource}/api/data/v9.2/contacts"
    # Base headers with authorization token
    headers = {
        'Authorization': f'Bearer {access_token}',
        'OData-MaxVersion': '4.0',
        'OData-Version': '4.0',
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    }

    # Create a new contact
    new_contact = { "firstname": "John", "lastname": "Doe" }
    print("Creating a new contact…")
    # Request the server to return the created representation. If not supported or omitted,
    # Dataverse often returns 204 No Content and provides the entity id in a response header.
    create_headers = headers.copy()
    create_headers['Prefer'] = 'return=representation'
    response = requests.post(web_api_url, headers=create_headers, json=new_contact)
    created_contact = {}
    contact_id = None
    # If the API returned the representation, parse the JSON
    if response.status_code in (200, 201):
        try:
            created_contact = response.json()
        except ValueError:
            created_contact = {}
        contact_id = created_contact.get('contactid')
        print("New contact created successfully (body returned).")
        print(f"Created Contact ID: {contact_id}")
    # If the API returned 204 No Content, Dataverse includes the entity URL in 'OData-EntityId' or 'Location'
    elif response.status_code == 204:
        entity_url = response.headers.get('OData-EntityId') or response.headers.get('Location')
        if entity_url:
            # Extract GUID using regex (GUID format)
            m = re.search(r"([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})", entity_url)
            if m:
                contact_id = m.group(1)
                created_contact = {'contactid': contact_id}
                print("New contact created successfully (no body). Extracted Contact ID from headers:")
                print(f"Created Contact ID: {contact_id}")
            else:
                print("Created but couldn't parse entity id from response headers:")
                print(f"Headers: {response.headers}")
        else:
            print("Created but no entity location header found. Headers:")
            print(response.headers)
    else:
        print(f"Failed to create contact. Status code: {response.status_code}")
        print(f"Error details: {response.text}")

    if not contact_id:
        # Defensive: stop further CRUD if we don't have an id
        print("No contact id available; aborting read/update/delete steps.")
    else:
        # Read the created contact
        print("Reading the created contact…")
        response = requests.get(f"{web_api_url}({contact_id})", headers=headers)
        if response.status_code == 200:
            print("Contact retrieved successfully!")
            contact_data = response.json()
            print(json.dumps(contact_data, indent=4))
        else:
            print(f"Failed to retrieve contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")

        # Update the contact's email
        updated_data = { "emailaddress1": "john.doe@example.com" }
        response = requests.patch(f"{web_api_url}({contact_id})", headers=headers, json=updated_data)
        if response.status_code == 204:
            print("Contact updated successfully!")
        else:
            print(f"Failed to update contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")

        # Delete the contact
        response = requests.delete(f"{web_api_url}({contact_id})", headers=headers)
        if response.status_code == 204:
            print("Contact deleted successfully!")
        else:
            print(f"Failed to delete contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")
except requests.exceptions.RequestException as e:
    print(f"Request error: {e}")
except KeyError as e:
    print(f"Token not available: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")

You can use VS Code as your IDE: copy the above code into a Python file, then click Run Python File at the top of VS Code.

Hope this helps someone making Web API Calls using Python.

If you want to try this out, download the Python Notebook and open in VS Code.

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-Dataverse-FullCRUD-03.ipynb

If you want to continue following this series, don’t miss the next article.

Cheers,

PMDY

Integrating Copilot Studio Agents into a Console Application Got Easier – Quick Review

Well, the wait is over: you can now invoke your agents from within a console application using the #Agents SDK.

Thank you Rajeev for writing this. Reblogging it…

🔄Power Platform Environment Restore: Options & Best Practices

Hi Folks,

Restoring environments in Power Platform has evolved significantly.

In the past, Dynamics CRM On-Premise users relied on SQL database backups and manual restores. Today, administrators can perform environment restores in online instances with just a few clicks via the Power Platform Admin Center.

This guide outlines the available restore options and key considerations to ensure a smooth and secure process.

🛠️ Restore Options in Power Platform

1. Manual Backup Restore: Restore from a backup you manually created. Ideal before major customizations or version updates.
2. System Backup Restore: Use automated system backups created by Microsoft. Convenient but less flexible than manual backups.
3. Full Copy: Clone the entire environment, including data, customizations, and configurations. Suitable for staging or testing.
4. Partial Copy (Customizations & Schema Only): Copies only solutions and schema (no data). Best for promoting configurations from Production to SIT/UAT.

✅ Best Practices & Key Considerations

  • Use Partial Copy for Non-Production: When restoring from Production to SIT/UAT, prefer Partial Copy to avoid data and configuration mismatches. This brings all solutions without the underlying data.
  • Use Full Copy for Same-Type Restores: Prefer a Full Copy when restoring to the same type of environment.
  • Avoid Restoring Production Backups to Non-Prod: Manual or system backups from Production should not be restored to non-production environments. This often leads to configuration conflicts and user access issues.
  • Update Security Groups: Always update the Security Group when restoring or copying to a different environment type. Otherwise, users may be unable to log in due to mismatched access controls.
  • Manual Backup Timing: After creating a manual backup, wait 10–15 minutes before initiating a restore. This ensures the backup is fully processed and available.
  • Regional Restore Limitation: You can only restore an environment to another environment within the same region.
  • Unlimited Manual Backups: There’s no cap on the number of manual backups you can create—use this flexibility to safeguard key milestones.
  • Exclude Audit Logs When Possible: Including Audit Logs in copies or restores can significantly increase processing time. Exclude them unless absolutely necessary.

🧠 Technical Note

All backup and restore operations in Power Platform are powered by SQL-based technology under the hood, ensuring consistency and reliability across environments.

Reference:

https://learn.microsoft.com/en-us/power-platform/admin/backup-restore-environments?tabs=new

Cheers,

PMDY