Dataverse or SQL Server? And Where Does PostgreSQL Fit…for Power Apps Implementation?

Hi Folks,

Hope you’re all doing great and staying safe. This week, let’s dive into a question almost every architect, developer, and Power Platform enthusiast eventually faces when choosing a backend for the implementation—but rarely gets a clear answer to:

“Should I use Dataverse, SQL Server, or PostgreSQL for my next Power Apps solution…?”

With so many platforms claiming speed, scalability, flexibility, and low‑code magic, choosing the right one can feel like navigating a maze. But don’t worry—I’ve broken it all down into a simple, structured guide to help you make the right implementation choice.

Whether you’re building Power Apps, designing enterprise systems, or architecting cloud‑native solutions, this comparison will help you understand what each platform really offers, how they differ, and, most importantly, which one fits your scenario best.

Let’s jump in and make your next data decision a confident one.

Firstly, why think of PostgreSQL in a Microsoft ecosystem?

While Power Platform implementations rarely use PostgreSQL directly, it remains one of the most widely adopted enterprise databases. Including it in this comparison helps architects understand how Microsoft’s data platforms stack up against a major industry standard.

High‑Level Summary

Dataverse is a managed, low‑code data platform built for the Microsoft Power Platform. SQL Server is a commercial, enterprise-grade relational database tightly integrated with the Microsoft ecosystem. PostgreSQL is an open‑source, highly extensible relational database known for standards compliance and advanced features.

What Each One Is

🟦 Microsoft Dataverse

  • A cloud-based data platform used by Power Apps, Power Automate, Dynamics 365.
  • Not just a database—includes security, business rules, API layer, auditing, integration, and a managed schema.
  • Under the hood uses Azure SQL, Cosmos DB, and Azure Blob Storage.

🟥 Microsoft SQL Server

  • A full-featured relational database management system (RDBMS).
  • Commercial licensing, strong enterprise tooling, and deep integration with Azure, .NET, Windows Server.
  • Supports OLTP, analytics, and BI workloads.

🟩 PostgreSQL

  • A free, open-source RDBMS with strong SQL standards compliance.
  • Known for extensibility (custom types, functions, extensions like PostGIS).
  • Competes directly with SQL Server in enterprise features without licensing fees.

Comparison Table

| Feature / Aspect | Dataverse | SQL Server | PostgreSQL |
| --- | --- | --- | --- |
| Primary Purpose | Low‑code app data platform | Enterprise RDBMS | Open‑source enterprise RDBMS |
| Best For | Power Platform & Dynamics apps | Enterprise apps, BI, Microsoft stack | Cross‑platform apps, open-source ecosystems |
| Hosting | Fully managed SaaS | On‑prem, Azure, hybrid | On‑prem, cloud (AWS, Azure, GCP), hybrid |
| Licensing | Per‑user/app licensing | Commercial licenses | Free (open source) |
| Extensibility | Limited (managed schema) | High | Very high (extensions, custom types) |
| APIs | Built‑in REST, OData | Requires custom API layer | Requires custom API layer |
| Security Model | Row-level, role-based, built-in | Highly configurable | Highly configurable |
| Performance Control | Limited (managed) | Full control | Full control |
| Use in Power Platform | Native | Requires connectors | Requires connectors |

Key Differences Explained

1. Purpose & Abstraction Level

  • Dataverse abstracts away database management. You don’t manage tables, indexes, or backups—Microsoft does.
  • SQL Server and PostgreSQL give you full control over schema, performance tuning, and infrastructure.

2. Integration

  • Dataverse is the default data layer for Power Apps and Dynamics 365.
  • SQL Server integrates deeply with Microsoft tools (SSIS, SSRS, Azure Synapse).
  • PostgreSQL integrates broadly across open-source ecosystems and cloud platforms.

3. Cost Model

  • Dataverse: Licensing based on Power Platform usage (can get expensive at scale).
  • SQL Server: Licensing per core or CAL.
  • PostgreSQL: Free, with optional paid support.

4. Flexibility

  • Dataverse: Highly opinionated; great for business apps but restrictive for custom architectures.
  • SQL Server: Flexible but within Microsoft’s ecosystem.
  • PostgreSQL: Most flexible—extensions, custom data types, procedural languages.

5. Scalability

  • Dataverse: Scales automatically but within platform limits.
  • SQL Server: Scales vertically and horizontally (with Always On, sharding patterns).
  • PostgreSQL: Scales well; many cloud providers offer managed scaling.

When to Use Each

Choose Dataverse if:

  • You’re building Power Apps, Power Automate, or Dynamics 365 solutions.
  • You want zero database administration.
  • You need built‑in security, auditing, business rules, and managed APIs.

Choose SQL Server if:

  • You’re in a Microsoft-centric enterprise.
  • You need high-performance OLTP, BI, or analytics.
  • You want tight integration with Azure and .NET.

Choose PostgreSQL if:

  • You want open-source, cost-effective, and highly extensible technology.
  • You need advanced SQL features or geospatial support (PostGIS).
  • You want cloud portability (AWS, Azure, GCP).

How to Decide Quickly

Ask yourself the following questions:

  1. Are you building Power Platform apps? → Use Dataverse.
  2. Are you building enterprise apps in the Microsoft ecosystem? → Use SQL Server.
  3. Do you want open-source, flexible, and cloud-portable? → Use PostgreSQL.

References:

https://www.postgresql.org/

Cheers,

PMDY

Why Microsoft Support Asks for a HAR File …?

Hello Microsoft Folks,

At some point in your career, you will need to raise a Microsoft Support ticket to report a product issue, particularly for Power Apps implementations, as we did recently.

Microsoft Support generally asks you to send a HAR file before escalating an issue to the product team. In this blog post, let’s understand what a HAR file is and why the product team needs it.

A HAR file (HTTP Archive) is a diagnostic capture of everything your browser does during a web session: network calls, payloads, headers, timings, redirects, failures, and more.

When you raise a ticket for Power Apps, Power Automate, or the Power BI service, the product team often needs more than screenshots; they can’t reproduce the issue from images alone. They need to see exactly what your browser saw.

What a HAR File Includes, and Why the Microsoft Product Team Needs It

Think of it as a flight recorder for your browser:

| What’s captured | Examples | Why the product team needs it |
| --- | --- | --- |
| Network requests | Every API call your browser makes | To spot failing endpoints or throttling |
| Request/response headers | Auth tokens, cookies, metadata | To check authentication, region routing, tenant context |
| Payloads | JSON bodies sent/received | To see malformed data, schema mismatches, or server errors |
| Timings | DNS, SSL, wait time, download time | To diagnose latency, timeouts, or CDN issues |
| Errors | 4xx/5xx responses | To pinpoint backend failures |
This is the only way the engineering team can see the real sequence of events that caused your issue.

 Why It’s Critical for Power Platform Issues

Especially in Power Platform, a HAR file helps diagnose:

• Connector calls failing due to throttling
• Canvas app load failures
• Dataverse API errors
• Authentication loops (AAD, MSAL, cookies)
• Portal/Power Pages rendering issues
• Power BI embedded or service-side failures
• Browser-specific regressions
• Region misrouting or CDN cache issues

You’ve probably seen cases where the UI shows a generic message like Something went wrong.

The HAR file reveals the actual error behind it.
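You can even do a first pass on that yourself before raising the ticket. Below is a minimal Python sketch that assumes the standard HAR 1.2 JSON layout (log.entries with request/response objects); the file name is just an example.

import json

# Load the HAR file exported from the browser's developer tools
with open("powerapps-session.har", encoding="utf-8") as f:
    har = json.load(f)

# Print every failing request (4xx/5xx) with its status, method, URL, and round-trip time
for entry in har["log"]["entries"]:
    status = entry["response"]["status"]
    if status >= 400:
        print(status, entry["request"]["method"], entry["request"]["url"], f'{entry["time"]:.0f} ms')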

 Is It Safe?

A HAR file can contain sensitive data (tokens, cookies, request bodies).
That’s why Microsoft always asks you to:

• Reproduce the issue in a test environment if possible
• Scrub sensitive fields if needed
• Upload via the secure support portal

Microsoft support uses it only for debugging and deletes it after the case is resolved.
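If you do need to scrub a capture before uploading it, a sketch like the one below works on the same standard HAR structure: it masks the usual sensitive headers and drops captured cookies. The file names are examples, and depending on your data you may also want to clear request/response bodies.

import json

SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie"}

with open("powerapps-session.har", encoding="utf-8") as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    for section in (entry.get("request", {}), entry.get("response", {})):
        # Mask sensitive header values but keep the structure intact
        for header in section.get("headers", []):
            if header.get("name", "").lower() in SENSITIVE_HEADERS:
                header["value"] = "REDACTED"
        # Drop captured cookies entirely
        section["cookies"] = []

with open("powerapps-session-scrubbed.har", "w", encoding="utf-8") as f:
    json.dump(har, f, indent=2)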

 With a HAR file, MS Engineers can:

• Reproduce the issue in their internal environment
• Identify whether the problem is client-side, network-side, or server-side
• Trace the exact failing API
• Confirm whether it’s a regression, configuration issue, or tenant-specific problem
• Escalate to the product group with concrete evidence

Now that you understand the purpose of the HAR file, use the link below to learn how to capture one:

https://learn.microsoft.com/en-us/azure/azure-portal/capture-browser-trace

Cheers,

PMDY

Python + Dataverse Series – Post #03: Create, Update, Delete records via Web API

Hi Folks,

This is a continuation of the Python with Dataverse series. In this blog post, we will perform full CRUD (Create, Retrieve, Update, Delete) operations in Dataverse using the Web API.

You can use the code below to make Web API calls to Dataverse.

import pyodbc  # not used in this Web API sample
import msal
import requests
import json
import re

# Azure AD details
client_id = 'XXXX'
client_secret = 'XXXX'
tenant_id = 'XXXX'
authority = f'https://login.microsoftonline.com/{tenant_id}'
resource = 'https://XXXX.crm8.dynamics.com'

# SQL endpoint (not used in this Web API sample)
sql_server = 'XXXX.crm8.dynamics.com'
database = 'XXXX'

# Get token with error handling
try:
    print(f"Attempting to authenticate with tenant: {tenant_id}")
    print(f"Authority URL: {authority}")
    app = msal.ConfidentialClientApplication(client_id, authority=authority, client_credential=client_secret)
    print("Acquiring token…")
    token_response = app.acquire_token_for_client(scopes=[f'{resource}/.default'])
    if 'error' in token_response:
        print(f"Token acquisition failed: {token_response['error']}")
        print(f"Error description: {token_response.get('error_description', 'No description available')}")
    else:
        access_token = token_response['access_token']
        print("Token acquired successfully and your token is! " + access_token)
        print(f"Token length: {len(access_token)} characters")
except ValueError as e:
    print(f"Configuration Error: {e}")
    print("\nPossible solutions:")
    print("1. Verify your tenant ID is correct")
    print("2. Check if the tenant exists and is active")
    print("3. Ensure you're using the right Azure cloud (commercial, government, etc.)")
except Exception as e:
    print(f"Unexpected error: {e}")

# Full CRUD operations – Create, Read, Update, Delete a contact in Dataverse
try:
    print("Making Web API request to perform CRUD operations on contacts…")
    # Dataverse Web API endpoint for contacts
    web_api_url = f"{resource}/api/data/v9.2/contacts"
    # Base headers with authorization token
    headers = {
        'Authorization': f'Bearer {access_token}',
        'OData-MaxVersion': '4.0',
        'OData-Version': '4.0',
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    }

    # Create a new contact
    new_contact = {"firstname": "John", "lastname": "Doe"}
    print("Creating a new contact…")
    # Request the server to return the created representation. If not supported or omitted,
    # Dataverse often returns 204 No Content and provides the entity id in a response header.
    create_headers = headers.copy()
    create_headers['Prefer'] = 'return=representation'
    response = requests.post(web_api_url, headers=create_headers, json=new_contact)
    created_contact = {}
    contact_id = None

    # If the API returned the representation, parse the JSON
    if response.status_code in (200, 201):
        try:
            created_contact = response.json()
        except ValueError:
            created_contact = {}
        contact_id = created_contact.get('contactid') or created_contact.get('contactid@odata.bind')
        print("New contact created successfully (body returned).")
        print(f"Created Contact ID: {contact_id}")
    # If the API returned 204 No Content, Dataverse includes the entity URL in 'OData-EntityId' or 'Location'
    elif response.status_code == 204:
        entity_url = response.headers.get('OData-EntityId') or response.headers.get('Location')
        if entity_url:
            # Extract GUID using regex (GUID format)
            m = re.search(r"([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})", entity_url)
            if m:
                contact_id = m.group(1)
                created_contact = {'contactid': contact_id}
                print("New contact created successfully (no body). Extracted Contact ID from headers:")
                print(f"Created Contact ID: {contact_id}")
            else:
                print("Created but couldn't parse entity id from response headers:")
                print(f"Headers: {response.headers}")
        else:
            print("Created but no entity location header found. Headers:")
            print(response.headers)
    else:
        print(f"Failed to create contact. Status code: {response.status_code}")
        print(f"Error details: {response.text}")

    # Read, update, and delete the created contact
    if not contact_id:
        # Defensive: stop further CRUD if we don't have an id
        print("No contact id available; aborting read/update/delete steps.")
    else:
        print("Reading the created contact…")
        response = requests.get(f"{web_api_url}({contact_id})", headers=headers)
        if response.status_code == 200:
            print("Contact retrieved successfully!")
            contact_data = response.json()
            print(json.dumps(contact_data, indent=4))
        else:
            print(f"Failed to retrieve contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")

        # Update the contact's email
        updated_data = {"emailaddress1": "john.doe@example.com"}
        response = requests.patch(f"{web_api_url}({contact_id})", headers=headers, json=updated_data)
        if response.status_code == 204:
            print("Contact updated successfully!")
        else:
            print(f"Failed to update contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")

        # Delete the contact
        response = requests.delete(f"{web_api_url}({contact_id})", headers=headers)
        if response.status_code == 204:
            print("Contact deleted successfully!")
        else:
            print(f"Failed to delete contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")
except requests.exceptions.RequestException as e:
    print(f"Request error: {e}")
except KeyError as e:
    print(f"Token not available: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")

You can use VS Code as your IDE: copy the above code into a Python file, then click Run Python File at the top of VS Code.
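If the msal and requests packages are not already installed in your Python environment, add them first (assuming pip is available on your PATH):

pip install msal requests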

Hope this helps someone making Web API Calls using Python.

If you want to try this out, download the Python Notebook and open in VS Code.

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-Dataverse-FullCRUD-03.ipynb

If you’re following this series, don’t miss the next article.

Cheers,

PMDY

Integrating Copilot Studio Agents into a Console got easier – Quick Review

Well, the wait is over: you can now invoke your agents from a console application using the #Agents SDK.

Thank you Rajeev for writing this. Reblogging it…

Building a Cloud-Native Power Apps ALM Pipeline with GitLab, Google Cloud Build and PAC CLI: Streamlining Solution Lifecycle Automation

A unique combination to achieve deployment automation of Power Platform Solutions

Hi Folks,

This post is about Power Platform ALM integrating with a different ecosystem than usual, namely Google Cloud. Sounds interesting? This approach is mainly intended for folks already using Google Cloud or GitLab as part of their implementation.

Integrating Google Cloud Build with Power Platform for ALM (Application Lifecycle Management) using GitLab is feasible and beneficial. This integration combines GitLab as a unified DevOps platform with Google Cloud Build for executing CI/CD pipelines, enabling automated build, test, export, and deployment of Power Platform solutions efficiently. This was the core idea for my session on Friday 28 November, at New Zealand Business Applications Summit 2025.

Detailed Steps for this implementation

Create an access token in GitLab for API Access and Read Access

Click on Add new token. At a minimum, select the scopes below when working with CI/CD using GitLab.

Create a host connection for the repository in GitLab

Specify the personal access token created in the previous step

Link your repository

The host connection created in the previous step will be shown under the Connections drop-down.

Create Trigger in Google Cloud Build

Click on Create trigger above, provide a name, and select the nearest region.

Event:

For now, I am choosing Manual invocation for illustration

Specify the repository in GitLab where your YAML resides.

You can optionally specify substitution variables, which are simply parameters you can pass to your pipeline from the Google Cloud Build configuration.

You can optionally require an approval for the build, and choose the service account associated with your Google account from the drop-down.

Click on Save.

Next proceed to GitLab YAML

You can find the full code below

steps:
  - id: "export_managed"
    name: "mcr.microsoft.com/dotnet/sdk:9.0"
    entrypoint: "bash"
    args:
      - "-c"
      - |
        echo "=== 🏁 Starting Export Process ==="
        # ✅ Define solution name from substitution variable
        SOLUTION_NAME="${_SOLUTION_NAME}"
        # ✅ Install PAC CLI
        mkdir -p "${_HOME}/.dotnet/tools"
        dotnet tool install --global Microsoft.PowerApps.CLI.Tool --version 1.48.2 || true
        # Add dotnet global tools dir to the shell PATH for this step/session (preserve existing PATH)
        export PATH="$_PATH:${_HOME}/.dotnet/tools"
        echo "=== 🔐 Authenticating to Power Platform Environment ==="
        pac auth create --name "manual" --url "https://ecellorsdev.crm8.dynamics.com" --tenant "XXXXX-XXXX-XXXXX-XXXXXX-XXXXX" --applicationId "XXXXXXXXXXXXXX" --clientSecret "XXXXXXXXXXXXXXXX"
        pac auth list
        echo "=== 📦 Exporting Solution: ${_SOLUTION_NAME} ==="
        pac solution export \
          --name "${_SOLUTION_NAME}" \
          --path "/tmp/${_SOLUTION_NAME}.zip" \
          --managed true \
          --environment "${_SOURCE_ENV_URL}"
        echo "=== ✅ Solution exported to /tmp/${_SOLUTION_NAME}.zip ==="
        echo "=== 🔐 Authenticating to Target Environment ==="
        pac auth create \
          --name "target" \
          --url "https://org94bd5a39.crm.dynamics.com" \
          --tenant "XXXXXXXXXXXXXXXXXXXXXXXX" \
          --applicationId "XXXX-XXXXX-XXXXX-XXXXXX" \
          --clientSecret "xxxxxxxxxxxxxxxxxxxx"
        echo "=== 📥 Importing Solution to Target Environment ==="
        pac solution import \
          --path "/tmp/${_SOLUTION_NAME}.zip" \
          --environment "${_TARGET_ENV_URL}" \
          --activate-plugins \
          --publish-changes
        echo "=== 🎉 Solution imported successfully! ==="
options:
  logging: CLOUD_LOGGING_ONLY
substitutions:
  _SOLUTION_NAME: "PluginsForALM_GitLab"
  _SOURCE_ENV_URL: "https://org.crm.dynamics.com"
  _TARGET_ENV_URL: "https://org.crm.dynamics.com"
  _TENANT_ID: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  _CLIENT_ID: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  _CLIENT_SECRET: "xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  _SOLUTIONS_DIR: "/workspace/Plugins/08112025"

Solution from Source Environment

Now let’s run the trigger, which will export the solution from the source environment and import it into the target environment. You can use a manual trigger, an automatic trigger that fires on every commit to the GitLab repo, and so on; pick whatever suits your needs best.
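If you prefer to kick the same config off from a terminal instead of clicking Run in the console, a manual submission along these lines should work. This is only a sketch: it assumes you have a local copy of the YAML and an authenticated gcloud CLI, and the file name, region, and substitution value are placeholders from this demo.

gcloud builds submit --no-source \
  --config=GitLabDemo.yaml \
  --region=us-central1 \
  --substitutions=_SOLUTION_NAME=PluginsForALM_GitLab

The substitution override lets you export a different solution per run without editing the YAML.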

Solution imported to the target environment using Google Cloud Build

The table below illustrates key differences between Google Cloud Build and Azure DevOps.

| Aspect | Google Cloud Build | Azure DevOps Build Pipelines |
| --- | --- | --- |
| Pricing Model | Pay-as-you-go with per-second billing | Per-minute billing with tiered pricing |
| Cost Optimization | Sustained use discounts, preemptible VMs | Reserved capacity and enterprise agreements |
| Build Environment | Serverless, container-native, managed by Google Cloud | Requires self-hosted or paid hosted agents |
| Free Tier | Available with build minutes and credits | Available but more limited |
| Operational Overhead | Low, no need to manage build agents | Higher, managing agents or paying for hosted agents |
| Ideal For | Variable, short, or containerized workloads | Large Microsoft-centric organizations |
| Integration Cost Impact | Tightly integrated with Google Cloud serverless infrastructure | Integrated with Microsoft ecosystem but may incur additional licensing costs |

Conclusion:

PAC CLI is a powerful command-line tool that automates authentication, environment, and solution management within Power Platform ALM, enabling consistent and repeatable deployment workflows. It integrates smoothly with DevOps tools like GitLab and Google Cloud Build, helping teams scale ALM practices efficiently while maintaining control and visibility over Power Platform environments. Just note that my intention was to showcase the power of the PAC CLI with a wider ecosystem, not only with Microsoft.

Cheers,

PMDY

Showing multiselect option set from Model Driven Apps in Power BI

Hi Folks,

Well, this post will show you how to work with multiselect option sets from Dynamics 365 in Power BI. Some basic understanding of Power BI Desktop helps, but I have kept it simple enough for people with little background to follow. I scanned the internet and couldn’t find a similar post, so I am blogging it in case it helps someone. I faced this issue myself, and here is the solution: you don’t need XrmToolBox, Postman, or complex Power Query, as many posts on the internet would suggest.

So, follow along with me: if you are trying to show the values of a multiselect option set from model-driven apps in Power BI as below, then this post is absolutely for you.

Practically, if we retrieve the value of the multiselect option set field as shown in the image above, we get something like the below: comma-separated values.

Now, based on the use case and requirement, we need to transform the data, i.e. split the values into rows or columns using a delimiter; in this case, we use a comma. Here I am splitting into multiple rows, as I need to show the contacts for the different option values selected on the record.

Select the respective field and choose the Split Column option available in the ribbon.

Next, you will be presented with the Split Column by Delimiter dialog box; select the options as below and click OK.

Next, in the Split Column by Delimiter options, choose as below.

Once you click OK, the multiselect option set column is split into single values, with each selected value shown on its own row.

We can use the Dataverse REST API to get the option set labels in Power BI: click Get Data –> Web, enter the URL below to get the multiselect option set values, and click Load. You can refer here for some additional reference.

https://ecellorshost.crm5.dynamics.com/api/data/v9.2/stringmaps?$filter=attributename%20eq%20%27powerbi_multioptionset%27
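If you want to sanity-check what that endpoint returns before loading it into Power BI, the same stringmaps query can be run from Python, in the spirit of my Python + Dataverse series. This is only a sketch: the org URL and attribute name are the ones from this example, and the token placeholder assumes you acquire it via MSAL as shown in that series.

import requests

access_token = 'XXXX'  # acquire via MSAL, as shown in the Python + Dataverse series

url = "https://ecellorshost.crm5.dynamics.com/api/data/v9.2/stringmaps"
params = {"$filter": "attributename eq 'powerbi_multioptionset'"}
headers = {"Authorization": f"Bearer {access_token}", "Accept": "application/json"}

response = requests.get(url, headers=headers, params=params)
response.raise_for_status()

# Each stringmap row maps an option value to its display label
for row in response.json()["value"]:
    print(row["attributevalue"], "->", row["value"])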

Once the data is loaded, it should look as below.

Now click Close & Apply so the transformation is saved to the model. Then, in the model view, create the relationship between the multiselect option set field in the contact table and the string map table, as below.

Once the relationship is established, we can proceed with plotting the data in a visual of your choice; I kept it simple here.

Hope this helps someone with a similar requirement, and saves at least a couple of seconds.

Cheers,

PMDY

Enhancing Dataverse Plugins with Bulk Message Operations

Well, this could be a very interesting post, as we talk about optimizing Dataverse performance using bulk operation messages together with Dataverse plugin customizations. But wait: this post is not complete, because of an issue I will talk about later in the blog. First, let’s dig into this feature by actually trying it out. Generally, every business wants improved performance for any logic attached to out-of-the-box messages, so developers try to optimize their code in various ways when using Dataverse messages.

Before diving deeper into this article, let’s first understand the differences between standard and elastic tables. If you want an introduction to elastic tables, which were newly introduced last year, you can refer to my previous post on elastic tables here.

The type of table you choose to store your data has the greatest impact on how much throughput you can expect with bulk operations. You can choose between two types of tables in Dataverse; below are some key differences:

| Aspect | Standard Tables | Elastic Tables |
| --- | --- | --- |
| Data Structure | Defined schema | Flexible schema |
| Storage | Stores data in Azure SQL | Stores data in Azure Cosmos DB |
| Data Integrity | Ensured | Less strict |
| Relationship Model | Supported | Limited |
| Performance | Predictable | Variable; preferred for unpredictable and spiky workloads |
| Agility | Limited | High |
| Personalization | Limited | Extensive |

Standard and Elastic Table Differences

Plugins:

With bulk operation messages, the APIs introduced are CreateMultiple, UpdateMultiple, DeleteMultiple (only for elastic tables), and UpsertMultiple (preview). As of now, you’re not required to migrate your plug-ins to use CreateMultiple and UpdateMultiple instead of the Create and Update messages. Your logic for Create and Update continues to be applied when applications use CreateMultiple or UpdateMultiple.

This is mainly done to avoid maintaining two separate pieces of business logic for short-running and long-running activities. Microsoft has merged the message processing pipelines for these messages (Create and CreateMultiple; Update and UpdateMultiple), which means your existing Create and Update steps continue to trigger for your implemented scenarios, and they keep behaving the same way even when applications switch to CreateMultiple and UpdateMultiple.

A few points for consideration:

  1. In my testing, IPluginExecutionContext still provides the context information; the Microsoft documentation suggests using IPluginExecutionContext4 for bulk messages in plugins, but in my environment it is still coming back as null.
  2. When working with Create, Update, and Delete, you would use the Target property to get the input parameters; when working with bulk operation messages, you need to use Targets instead of Target.
  3. Instead of checking whether the target is an Entity, check for an EntityCollection, loop through it, and perform your desired business logic.
  4. Coming to images in plugins, these are retrieved only when you use IPluginExecutionContext4.

Below is the image from the Plugin Registration Tool for reference (I have taken UpdateMultiple as the example; you can utilize any of the bulk operation messages).

Sample:

Below is a sample of how your bulk operation message plugin can look. You don’t need to use all the contexts; I have used them just to check them out.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Crm.Sdk;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

namespace Plugin_Sample
{
    public class BulkMessagePlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            IPluginExecutionContext2 context2 = (IPluginExecutionContext2)serviceProvider.GetService(typeof(IPluginExecutionContext2));
            IPluginExecutionContext3 context3 = (IPluginExecutionContext3)serviceProvider.GetService(typeof(IPluginExecutionContext3));
            IPluginExecutionContext4 context4 = (IPluginExecutionContext4)serviceProvider.GetService(typeof(IPluginExecutionContext4));
            ITracingService trace = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

            // Verify input parameters: bulk messages carry an EntityCollection in 'Targets'
            if (context4.InputParameters.Contains("Targets") && context.InputParameters["Targets"] is EntityCollection entityCollection)
            {
                // Verify expected entity images from step registration
                if (context4.PreEntityImagesCollection.Length == entityCollection.Entities.Count)
                {
                    int count = 0;
                    foreach (Entity entity in entityCollection.Entities)
                    {
                        EntityImageCollection entityImages = context4.PreEntityImagesCollection[count];

                        // Verify expected entity image from step registration
                        if (entityImages.TryGetValue("preimage", out Entity preImage))
                        {
                            bool entityContainsSampleName = entity.Contains("fieldname");
                            bool entityImageContainsSampleName = preImage.Contains("fieldname");

                            if (entityContainsSampleName && entityImageContainsSampleName)
                            {
                                // Verify that the 'fieldname' values are different
                                if (entity["fieldname"] != preImage["fieldname"])
                                {
                                    string newName = (string)entity["fieldname"];
                                    string oldName = (string)preImage["fieldname"];
                                    string message = $"\r\n - 'fieldname' changed from '{oldName}' to '{newName}'.";

                                    // If the 'sample_description' is included in the update, do not overwrite it, just append to it.
                                    if (entity.Contains("sample_description"))
                                    {
                                        entity["sample_description"] = entity["sample_description"] += message;
                                    }
                                    else // The sample description is not included in the update, overwrite with current value + addition.
                                    {
                                        entity["sample_description"] = preImage["sample_description"] += message;
                                    }
                                }
                            }
                        }

                        // Move to the pre-image that corresponds to the next entity in the collection
                        count++;
                    }
                }
            }
        }
    }
}

I have posted this question to Microsoft to get more details on why IPluginExecutionContext4 is null; I am still not sure whether this simply hasn’t been deployed to my region yet (my environment is in India).

Recommendations for Plugins:

  • Don't register CreateMultiple, UpdateMultiple, or UpsertMultiple as a separate step alongside the single-record messages, as the logic would then fire twice: once for the Create operation and once for CreateMultiple.
  • Don't use batch request types such as ExecuteMultipleRequest and ExecuteTransactionRequest in plugins, as user experiences are degraded and timeout errors can occur.
  • Instead, use the bulk operation messages such as CreateMultipleRequest, UpdateMultipleRequest, and UpsertMultipleRequest.
  • There is no need to use ExecuteTransactionRequest in synchronous plugins, as they already execute within the transaction.

Hope this guidance will help someone trying to customize their Power Platform solutions using Plugins.

I will write another blog post on using Bulk operation messages for Client Applications…

Cheers,

PMDY

Another way to install Plugin Registration Tool for Power Apps Developers from Nuget

Hi Folks,

If you are a Power Platform or Dynamics 365 CE developer, you will definitely need to work with the Plugin Registration Tool at some point, and having a local copy of it greatly helps. In this post, I will show a slightly different, very easy way to install the Plugin Registration Tool.

This approach was especially useful to me when I got a new laptop and needed the Plugin Registration Tool for an implementation where the plugins were already built.

The first three ways to download the Plugin Registration Tool are probably known to everyone… but did you know there is a fourth approach as well?

  1. From XrmToolBox
  2. From https://xrm.tools/SDK
  3. Installation from CLI
  4. See below

Because these approaches have limitations, at least in my experience, I found the fourth one very useful.

  1. XrmToolBox – Not quite convenient to profile and debug your plugins
  2. https://xrm.tools/SDK – DLLs in the downloaded folder will be blocked, and you need to manually unblock them for the tool to work properly
  3. CLI – People rarely use this.

Just note that this approach is very easy and works only if you already have a plugin project. Follow the steps below:

  1. Just open the plugin project.
  2. Right-click the solution and choose Manage NuGet Packages for Solution.
  3. Search for the Plugin Registration Tool as below.

4. Choose the plugin project and click Install; confirm the prompt and accept the license agreement shown.

5. Once installed, go to the project folder on your local machine.

6. Navigate to the packages folder; you should see a folder for the Plugin Registration Tool as below.

7. There you go: you can open the Plugin Registration application under the tools folder. You can undo the changes to the assembly if it is linked to source control.
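If you prefer typing over clicking, steps 2–4 can also be done from the Package Manager Console. I believe the package ID is Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool, but do confirm it against the search results from step 3:

Install-Package Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool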

That’s it. How easy was that? Hope this helps someone.

Cheers,

PMDY

Power Platform Solution Blue Print Review – Quick Recap

The Solution blueprint review covers all required topics. The workshop can also be conducted remotely; when it is, it is typical to divide the review into several sessions over several days.

The following sections cover the top-level topics of the Solution blueprint review and provide a sampling of the types of questions that are covered in each section.

Program strategy

Program strategy covers the process and structures that will guide the implementation. It also reviews the approach that will be used to capture, validate, and manage requirements, and the plan and schedule for creation and adoption of the solution.

This topic focuses on answering questions such as:

  • What are the goals of the implementation, and are they documented, well understood, and can they be measured?
  • What is the methodology being used to guide the implementation, and is it well understood by the entire implementation team?
  • What is the structure that is in place for the team that will conduct the implementation?
  • Are roles and responsibilities of all project roles documented and understood?
  • What is the process to manage scope and changes to scope, status, risks, and issues?
  • What is the plan and timeline for the implementation?
  • What is the approach to managing work within the plan?
  • What are the external dependencies and how are they considered in the project plan?
  • What are the timelines for planned rollout?
  • What is the approach to change management and adoption?
  • What is the process for gathering, validating, and approving requirements?
  • How and where will requirements be tracked and managed?
  • What is the approach for traceability between requirements and other aspects of the implementation (such as testing, training, and so on)?
  • What is the process for assessing fits and gaps?

Test strategy

Test strategy covers the various aspects of the implementation that deal with validating that the implemented solution works as defined and will meet the business need.

This topic focuses on answering questions such as:

  • What are the phases of testing and how do they build on each other to ensure validation of the solution?
  • Who is responsible for defining, building, implementing, and managing testing?
  • What is the plan to test performance?
  • What is the plan to test security?
  • What is the plan to test the cutover process?
  • Has a regression testing approach been planned that will allow for efficient uptake of updates?

Business process strategy

Business process strategy considers the underlying business processes (the functionality) that will be implemented on the Microsoft Dynamics 365 platform as part of the solution and how these processes will be used to drive the overall solution design.

This topic focuses on answering questions such as:

  • What are the top processes that are in scope for the implementation?
  • What is currently known about the general fit for the processes within the Dynamics 365 application set?
  • How are processes being managed within the implementation and how do they relate to subsequent areas of the solution such as user stories, requirements, test cases, and training?
  • Is the business process implementation schedule documented and understood?
  • Are requirements established for offline implementation of business processes?

Based on the processes that are in scope, the solution architect who is conducting the review might ask a series of feature-related questions to gauge complexity or understand potential risks or opportunities to optimize the solution based on the future product roadmap.

Application strategy

Application strategy considers the various apps, services, and platforms that will make up the overall solution.

This topic focuses on answering questions such as:

  • Which Dynamics 365 applications or services will be deployed as part of the solution?
  • Which Microsoft Azure capabilities or services will be deployed as part of the solution?
  • What new external application components or services, if any, will be deployed as part of the solution?
  • What legacy application components or services, if any, will be part of the solution?
  • What extensions to the Dynamics 365 applications and platform are planned?

Data strategy

Data strategy considers the design of the data within the solution and the design for how legacy data will be migrated to the solution.

This topic focuses on answering questions such as:

  • What are the plans for key data design issues like legal entity structure and data localization?
  • What is the scope and planned flow of key master data entities?
  • What is the scope and planned flow of key transactional data entities?
  • What is the scope of data migration?
  • What is the overall data migration strategy and approach?
  • What are the overall volumes of data to be managed within the solution?
  • What are the steps that will be taken to optimize data migration performance?

Integration strategy

Integration strategy considers the design of communication and connectivity between the various components of the solution. This strategy includes the application interfaces, middleware, and the processes that are required to manage the operation of the integrations.

This topic focuses on answering questions such as:

  • What is the scope of the integration design at an interface/interchange level?
  • What are the known non-functional requirements, like transaction volumes and connection modes, for each interface?
  • What are the design patterns that have been identified for use in implementing interfaces?
  • What are the design patterns that have been identified for managing integrations?
  • What middleware components are planned to be used within the solution?

Business intelligence strategy

Business intelligence strategy considers the design of the business intelligence features of the solution. This strategy includes traditional reporting and analytics. It includes the use of reporting and analytics features within the Dynamics 365 components and external components that will connect to Dynamics 365 data.

This topic focuses on answering questions such as:

  • What are the processes within the solution that depend on reporting and analytics capabilities?
  • What are the sources of data in the solution that will drive reporting and analytics?
  • What are the capabilities and constraints of these data sources?
  • What are the requirements for data movement across solution components to facilitate analytics and reporting?
  • What solution components have been identified to support reporting and analytics requirements?
  • What are the requirements to combine enterprise data from multiple systems/sources, and what does that strategy look like?

Security strategy

Security strategy considers the design of security within the Dynamics 365 components of the solution and the other Microsoft Azure and external solution components.

This topic focuses on answering questions such as:

  • What is the overall authentication strategy for the solution? Does it comply with the constraints of the Dynamics 365 platform?
  • What is the design of the tenant and directory structures within Azure?
  • Do unusual authentication needs exist, and what are the design patterns that will be used to solve them?
  • Do extraordinary encryption needs exist, and what are the design patterns that will be used to solve them?
  • Are data privacy or residency requirements established, and what are the design patterns that will be used to solve them?
  • Are extraordinary requirements established for row-level security, and what are the design patterns that will be used to solve them?
  • Are requirements in place for security validation or other compliance requirements, and what are the plans to address them?

Application lifecycle management strategy

Application lifecycle management (ALM) strategy considers those aspects of the solution that are related to how the solution is developed and how it will be maintained given that the Dynamics 365 apps are managed through continuous update.

This topic focuses on answering questions such as:

  • What is the preproduction environment strategy, and how does it support the implementation approach?
  • Does the environment strategy support the requirements of continuous update?
  • What plan for Azure DevOps will be used to support the implementation?
  • Does the implementation team understand the continuous update approach that is followed by Dynamics 365 and any other cloud services in the solution?
  • Does the planned ALM approach consider continuous update?
  • Who is responsible for managing the continuous update process?
  • Does the implementation team understand how continuous update will affect go-live events, and is a plan in place to optimize versions and updates to ensure supportability and stability during all phases?
  • Does the ALM approach include the management of configurations and extensions?

Environment and capacity strategy

Deployment architecture considers those aspects of the solution that are related to cloud infrastructure, environments, and the processes that are involved in operating the cloud solution.

This topic focuses on answering questions such as:

  • Has a determination been made about the number of production environments that will be deployed, and what are the factors that went into that decision?
  • What are the business continuance requirements for the solution, and do all solution components meet those requirements?
  • What are the master data and transactional processing volume requirements?
  • What locations will users access the solution from?
  • What are the network structures that are in place to provide connectivity to the solution?
  • Are requirements in place for mobile clients or the use of other specific client technologies?
  • Are the licensing requirements for the instances and supporting interfaces understood?

A solution blueprint is essential for effective solution architecture; using the above guiding principles will help in this process.

Thank you for reading…

Hope this helps…

Cheers,

PMDY

Triggers not available in Custom Connectors – Quick Review

Power Platform folks rarely build new custom connectors in a project; most work with existing ones. It is often observed that triggers are missing from a custom connector, and below are the things you can review if so.

1. Wrong Portal

If you’re building the connector in Power Apps, you won’t see trigger options. ✅ Fix: Use the Power Automate portal to define and test triggers. Only Power Automate supports trigger definitions for custom connectors.

2. Trigger Not Properly Defined

If your OpenAPI (Swagger) definition doesn’t include a valid x-ms-trigger, the trigger won’t appear.

Fix:

  • Make sure your OpenAPI includes a webhook or polling trigger.
  • Example: "x-ms-trigger": { "type": "Webhook", "workflow": true }
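For reference, here is a minimal sketch of what a polling trigger operation could look like in the Swagger definition. The path, operationId, and response schema are placeholders; in the definitions I have worked with, x-ms-trigger takes the string value batch (the poll returns a list of events) or single.

"/trigger/start": {
  "get": {
    "operationId": "WhenAnItemIsAdded",
    "summary": "When an item is added",
    "x-ms-trigger": "batch",
    "responses": {
      "200": {
        "description": "OK",
        "schema": { "type": "array", "items": { "type": "object" } }
      }
    }
  }
}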

3. Connector Not Refreshed

Sometimes, even after updating the connector, the UI doesn’t refresh.

Fix:

  • Delete and re-add the connector in your flow.
  • Or create a new connection in Power Automate to force a refresh.

4. Licensing or Environment Issues

If you’re in a restricted environment or missing permissions, triggers might not be available.

Fix:

  • Check if your environment allows custom connectors with triggers.
  • Ensure your user role has permission to create and use custom connectors.

5. Incorrect Host/Path in Swagger

If the host or path fields in your Swagger are misconfigured, the connector might fail silently.

Fix:

  • Ensure the host and path are correctly defined.
  • Avoid using just / as a path — use something like /trigger/start instead.

6. Incorrect Environment

Make sure you are in the right Power Platform environment; when juggling things around, we sometimes mistakenly look for connectors in the wrong environment. Do take note.

Finally, you should be able to see triggers while creating custom connectors.

Hope reviewing these will help…

Cheers,

PMDY