Python + Dataverse Series – #07: Running a Linear Normalization Algorithm on Dataverse Data Using Python

This is a continuation of the Dataverse SDK for Python series. If you haven't read the earlier articles, I'd encourage you to start from the beginning of the series.

Machine learning often begins with one essential step: data preprocessing. Before models can learn patterns, the raw data must be cleaned, scaled, and transformed into a form suitable for analysis. In this example, let me demonstrate how to retrieve numerical data from Microsoft Dataverse and apply a linear normalization algorithm using Python.

Normalization is a fundamental algorithm in machine learning pipelines. It rescales numeric values into a consistent range—typically between 0 and 1—making them easier for algorithms to interpret and compare.

1. Retrieving Data from Dataverse

Using the DataverseClient and Interactive Browser authentication, we connect to Dataverse and fetch the revenue field from the Account table. This gives us a small dataset to run our algorithm on.

from azure.identity import InteractiveBrowserCredential
from PowerPlatform.Dataverse.client import DataverseClient

credential = InteractiveBrowserCredential()
client = DataverseClient("https://ecellorsdev.crm8.dynamics.com", credential)

account_batches = client.get(
    "account",
    select=["accountid", "revenue"],
    top=10,
)

We then extract the revenue values into a NumPy array.
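The client returns the records as paged batches, so we loop over each batch and collect the non-empty revenue values before converting them to an array (the same pattern appears in the full script further below):

import numpy as np

revenues = []
for batch in account_batches:
    for account in batch:
        # Keep only records that actually have a revenue value
        if account.get("revenue") is not None:
            revenues.append(account["revenue"])
revenues = np.array(revenues)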

2. Implementing the Linear Normalization Algorithm

The algorithm used here is min–max normalization, defined as:

normalized = (x − min(x)) / (max(x) − min(x))

This algorithm ensures that (see the worked example after this list):

  • the smallest value becomes 0
  • the largest becomes 1
  • all other values fall proportionally in between
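For example, with illustrative revenue values of 100, 250 and 400: min(x) = 100 and max(x) = 400, so the normalized values are (100 − 100)/300 = 0.0, (250 − 100)/300 = 0.5 and (400 − 100)/300 = 1.0.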

Here’s the implementation:

import numpy as np
revenues = np.array(revenues)
min_rev = np.min(revenues)
max_rev = np.max(revenues)
normalized_revenues = (revenues - min_rev) / (max_rev - min_rev)

This is a classic preprocessing algorithm used in machine learning pipelines before feeding data into models such as regression, clustering, or neural networks.
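As a quick illustration of that downstream use, here is a minimal sketch, assuming scikit-learn is installed, that clusters the normalized revenues into two groups; the cluster count and the choice of KMeans are illustrative and not part of the original example:

from sklearn.cluster import KMeans

# scikit-learn expects a 2-D array of shape (n_samples, n_features)
X = normalized_revenues.reshape(-1, 1)

# Two clusters chosen purely for illustration
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster labels:", kmeans.labels_)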

3. Visualizing the Normalized Output

To better understand the effect of the algorithm, we plot the normalized values:

import matplotlib.pyplot as plt
plt.plot(normalized_revenues, marker='o')
plt.title('Normalized Revenues from Dataverse Accounts')
plt.xlabel('Account Index')
plt.ylabel('Normalized Revenue')
plt.grid()
plt.show()

The visualization highlights how the algorithm compresses the original revenue values into a uniform scale.

4. Why Normalization Matters

Normalization is not just a mathematical trick—it’s a crucial algorithmic step that:

  • prevents large values from dominating smaller ones
  • improves convergence in optimization-based models
  • enhances the stability of distance‑based algorithms
  • makes datasets comparable across different ranges
Putting it all together, here is the complete script:

# Running a linear normalization algorithm on data retrieved from Dataverse
from azure.identity import InteractiveBrowserCredential
from PowerPlatform.Dataverse.client import DataverseClient
import numpy as np

# Connect to Dataverse
credential = InteractiveBrowserCredential()
client = DataverseClient("https://ecellorsdev.crm8.dynamics.com", credential)

# Fetch account data as paged batches
account_batches = client.get(
    "account",
    select=["accountid", "revenue"],
    top=10,
)

# Collect the revenue values from each batch
revenues = []
for batch in account_batches:
    for account in batch:
        if "revenue" in account and account["revenue"] is not None:
            revenues.append(account["revenue"])
revenues = np.array(revenues)

# Apply a simple linear algorithm: normalize the revenues
if len(revenues) > 0:
    min_rev = np.min(revenues)
    max_rev = np.max(revenues)
    normalized_revenues = (revenues - min_rev) / (max_rev - min_rev)
    print("Normalized Revenues:", normalized_revenues)

    # Visualize the result
    import matplotlib.pyplot as plt
    plt.plot(normalized_revenues, marker='o')
    plt.title('Normalized Revenues from Dataverse Accounts')
    plt.xlabel('Account Index')
    plt.ylabel('Normalized Revenue')
    plt.grid()
    plt.show()

This code transforms raw Dataverse revenue data into normalized, machine-learning-ready values that can be analyzed, compared, and visualized effectively.
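If you prefer not to hand-roll the formula, scikit-learn's MinMaxScaler implements the same min–max normalization; here is a minimal sketch, assuming scikit-learn is installed:

from sklearn.preprocessing import MinMaxScaler

# MinMaxScaler works on 2-D arrays of shape (n_samples, n_features)
scaler = MinMaxScaler()
normalized = scaler.fit_transform(revenues.reshape(-1, 1)).ravel()
print("Normalized Revenues (scikit-learn):", normalized)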

You can download the Python Notebook below if you want to work with VS Code

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-RetrieveData-ApplyLinearAlgorithm.ipynb

Once you have opened the Python notebook, you can run the code cell by cell.

The interactive sign-in opens in another browser tab; once authenticated, the cells execute and display the normalized revenues along with the plot.

Hope you found this useful. It is going to get even more interesting, so stay tuned for the upcoming articles.

Cheers,

PMDY

Python + Dataverse Series – #06: Data preprocessing steps before running Machine Learning Algorithms

Hi Folks,

If you are a Power Platform consultant who is new to working with Python, I would encourage you to start from the beginning of this series.

We have now reached an interesting part of this series, where machine learning algorithms are run against Dataverse data. In this post we will look at why feature scaling is a critical preprocessing step for many machine learning algorithms: it ensures that all features contribute equally to the model's outcome, prevents numerical instability, and helps optimization algorithms converge faster to the optimal solution.

Before running most machine learning algorithms, we need to do some data preprocessing, such as scaling the data. In this case we will use min–max normalization (feature scaling to the [0, 1] range), shown below.
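The min–max formula rescales each value relative to the minimum and maximum of the column:

x_scaled = (x − min(x)) / (max(x) − min(x))

so the smallest revenue maps to 0, the largest maps to 1, and everything else falls proportionally in between.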

# Preprocessing step before running machine learning algorithms
from azure.identity import InteractiveBrowserCredential  # interactive browser login
from PowerPlatform.Dataverse.client import DataverseClient  # Dataverse SDK for Python client
import numpy as np  # NumPy library for the calculations

# Connect to Dataverse
credential = InteractiveBrowserCredential()
client = DataverseClient("https://ecellorsdev.crm8.dynamics.com", credential)  # create the Dataverse client

# Fetch the top 10 accounts (accountid, revenue columns) as paged batches
account_batches = client.get(
    "account",
    select=["accountid", "revenue"],
    top=10,
)

# Collect the revenue values from each batch
revenues = []
for batch in account_batches:
    for account in batch:
        if "revenue" in account and account["revenue"] is not None:
            revenues.append(account["revenue"])
revenues = np.array(revenues)

# Normalize the revenue
if len(revenues) > 0:
    min_rev = np.min(revenues)
    max_rev = np.max(revenues)
    normalized_revenues = (revenues - min_rev) / (max_rev - min_rev)
    print("Normalized Revenues:", normalized_revenues)

    # Visualize the result
    import matplotlib.pyplot as plt
    plt.plot(normalized_revenues, marker='o')
    plt.title('Normalized Revenues from Dataverse Accounts')
    plt.xlabel('Account Index')
    plt.ylabel('Normalized Revenue')
    plt.grid()
    plt.show()

You can download the Python Notebook below if you want to work with VS Code

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-PreProcessingStepBeforeMachineLearning.ipynb

Hope you found this useful…

Cheers,

PMDY

Python + Dataverse Series – How to run Python Code in VS Code

Hi Folks,

As you folks know, Python is currently the #1 programming language, with a massive, versatile ecosystem of libraries for data science, AI, and backend web development. This post kicks off a hands‑on series about working with Microsoft Dataverse using Python. We'll explore how to use the Dataverse SDK for Python to connect with Dataverse, automate data operations, and integrate Python solutions across the broader Power Platform ecosystem. Whether you're building data-driven apps, automating workflows, or extending Power Platform capabilities with custom logic, this series will help you get started with practical, real‑world examples.

https://www.microsoft.com/en-us/power-platform/blog/2025/12/03/dataverse-sdk-python/

With the release of the Dataverse SDK for Python, building Python-based logic for the Power Platform has become dramatically simpler. In this post, we'll walk through how to download Python and set it up in Visual Studio Code so you can start building applications that interact with Dataverse using Python. Sounds exciting already? Let's dive in and get everything set up.

1. Download Python from the official website below and install it on your computer.

https://www.python.org/ftp/python/3.14.3/python-3.14.3-amd64.exe

2. Install VS Code

Important: During the Python installation, make sure to check "Add Python to PATH". This ensures VS Code can detect Python automatically.

3. After installation, open VS Code and install the Python extension (Microsoft's official one). This extension enables IntelliSense, debugging, and running Python scripts.

4. That's it; you can now run Python code inside VS Code.

5. Create or open a Python file; a minimal sample file is shown below.
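For instance, here is a tiny script you could save as hello.py (the file name is just an example) to verify the setup:

# hello.py - a minimal script to confirm Python runs inside VS Code
import sys

print("Hello from Python", sys.version)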

6. If you want to run Python programs in VS Code, follow the options below.

a. Select Start Debugging

b. You will be prompted with a window like the one below

You can select the first option highlighted above; it automatically runs your Python code.

This is very easy to set up…

If you want to continue reading this series, check out the next article.

Hope this helps…

Cheers,

PMDY

Python + Dataverse Series – #05: Remove PII

Hi Folks,

This is a continuation of the Python + Dataverse series; it is worth checking it out from the start here.

At times, there is a need to remove PII (Personally Identifiable Information) from Dataverse environments. For this one-time task, you can easily run the Python script below; let's take the example of removing PII from Contact fields.

from azure.identity import InteractiveBrowserCredential
from PowerPlatform.Dataverse.client import DataverseClient

# Connect to Dataverse
credential = InteractiveBrowserCredential()
client = DataverseClient("https://ecellorsdev.crm8.dynamics.com", credential)

# Redact PII fields on Dataverse records, in this case contact records
def remove_pii_from_contact(contact):
    pii_fields = ['emailaddress1', 'telephone1', 'mobilephone', 'address1_line1', 'address1_city', 'address1_postalcode']
    for field in pii_fields:
        if field in contact:
            contact[field] = '[REDACTED]'
    return contact

# Fetch contacts with PII (Dataverse client returns paged batches)
contact_batches = client.get(
    "contact",
    select=[
        "contactid",
        "fullname",
        "emailaddress1",
        "telephone1",
        "mobilephone",
        "address1_line1",
        "address1_city",
        "address1_postalcode",
    ],
    top=10,
)

# Remove PII and update contacts
for batch in contact_batches:
    for contact in batch:
        contact_id = contact.get("contactid")
        sanitized_contact = remove_pii_from_contact(contact)
        # Prepare update data (exclude contactid)
        update_data = {key: value for key, value in sanitized_contact.items() if key != "contactid"}
        # Update the contact in Dataverse
        client.update("contact", contact_id, update_data)
        print(f"Contact {contact_id} updated with sanitized data: {sanitized_contact}")

If you want to work on this, download the Python Notebook to use in VS Code…

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-DataverseSDK-RemovePII.ipynb

Cheers,

PMDY

Python + Dataverse Series – Post #03: Create, Update, Delete records via Web API

Hi Folks,

This is a continuation of the Python + Dataverse series. In this blog post, we will perform full CRUD (Create, Retrieve, Update, Delete) operations in Dataverse using the Web API.

You can use the code below to make calls to Dataverse using the Web API.

import pyodbc  # not actually used in this example
import msal
import requests
import json
import re

# Azure AD details
client_id = 'XXXX'
client_secret = 'XXXX'
tenant_id = 'XXXX'
authority = f'https://login.microsoftonline.com/{tenant_id}'
resource = 'https://XXXX.crm8.dynamics.com'

# SQL endpoint (not used in this example)
sql_server = 'XXXX.crm8.dynamics.com'
database = 'XXXX'

# Get token with error handling
try:
    print(f"Attempting to authenticate with tenant: {tenant_id}")
    print(f"Authority URL: {authority}")
    app = msal.ConfidentialClientApplication(client_id, authority=authority, client_credential=client_secret)
    print("Acquiring token…")
    token_response = app.acquire_token_for_client(scopes=[f'{resource}/.default'])
    if 'error' in token_response:
        print(f"Token acquisition failed: {token_response['error']}")
        print(f"Error description: {token_response.get('error_description', 'No description available')}")
    else:
        access_token = token_response['access_token']
        print("Token acquired successfully and your token is: " + access_token)
        print(f"Token length: {len(access_token)} characters")
except ValueError as e:
    print(f"Configuration Error: {e}")
    print("\nPossible solutions:")
    print("1. Verify your tenant ID is correct")
    print("2. Check if the tenant exists and is active")
    print("3. Ensure you're using the right Azure cloud (commercial, government, etc.)")
except Exception as e:
    print(f"Unexpected error: {e}")

# Full CRUD Operations – Create, Read, Update, Delete a contact in Dataverse
try:
    print("Making Web API request to perform CRUD operations on contacts…")
    # Dataverse Web API endpoint for contacts
    web_api_url = f"{resource}/api/data/v9.2/contacts"
    # Base headers with authorization token
    headers = {
        'Authorization': f'Bearer {access_token}',
        'OData-MaxVersion': '4.0',
        'OData-Version': '4.0',
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    }

    # Create a new contact
    new_contact = { "firstname": "John", "lastname": "Doe" }
    print("Creating a new contact…")
    # Request the server to return the created representation. If not supported or omitted,
    # Dataverse often returns 204 No Content and provides the entity id in a response header.
    create_headers = headers.copy()
    create_headers['Prefer'] = 'return=representation'
    response = requests.post(web_api_url, headers=create_headers, json=new_contact)
    created_contact = {}
    contact_id = None

    # If the API returned the representation, parse the JSON
    if response.status_code in (200, 201):
        try:
            created_contact = response.json()
        except ValueError:
            created_contact = {}
        contact_id = created_contact.get('contactid') or created_contact.get('contactid@odata.bind')
        print("New contact created successfully (body returned).")
        print(f"Created Contact ID: {contact_id}")
    # If the API returned 204 No Content, Dataverse includes the entity URL in 'OData-EntityId' or 'Location'
    elif response.status_code == 204:
        entity_url = response.headers.get('OData-EntityId') or response.headers.get('Location')
        if entity_url:
            # Extract GUID using regex (GUID format)
            m = re.search(r"([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})", entity_url)
            if m:
                contact_id = m.group(1)
                created_contact = {'contactid': contact_id}
                print("New contact created successfully (no body). Extracted Contact ID from headers:")
                print(f"Created Contact ID: {contact_id}")
            else:
                print("Created but couldn't parse entity id from response headers:")
                print(f"Headers: {response.headers}")
        else:
            print("Created but no entity location header found. Headers:")
            print(response.headers)
    else:
        print(f"Failed to create contact. Status code: {response.status_code}")
        print(f"Error details: {response.text}")

    # Read, update and delete the created contact
    if not contact_id:
        # Defensive: stop further CRUD if we don't have an id
        print("No contact id available; aborting read/update/delete steps.")
    else:
        print("Reading the created contact…")
        response = requests.get(f"{web_api_url}({contact_id})", headers=headers)
        if response.status_code == 200:
            print("Contact retrieved successfully!")
            contact_data = response.json()
            print(json.dumps(contact_data, indent=4))
        else:
            print(f"Failed to retrieve contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")

        # Update the contact's email
        updated_data = { "emailaddress1": "john.doe@example.com" }
        response = requests.patch(f"{web_api_url}({contact_id})", headers=headers, json=updated_data)
        if response.status_code == 204:
            print("Contact updated successfully!")
        else:
            print(f"Failed to update contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")

        # Delete the contact
        response = requests.delete(f"{web_api_url}({contact_id})", headers=headers)
        if response.status_code == 204:
            print("Contact deleted successfully!")
        else:
            print(f"Failed to delete contact. Status code: {response.status_code}")
            print(f"Error details: {response.text}")
except requests.exceptions.RequestException as e:
    print(f"Request error: {e}")
except KeyError as e:
    print(f"Token not available: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")

You can use VS Code as the IDE: copy the above code into a Python file, then click Run Python File at the top of VS Code.

Hope this helps someone making Web API Calls using Python.

If you want to try this out, download the Python Notebook and open in VS Code.

https://github.com/pavanmanideep/DataverseSDK_PythonSamples/blob/main/Python-Dataverse-FullCRUD-03.ipynb

If you want to keep following this series, don't forget the next article.

Cheers,

PMDY

Building a Cloud-Native Power Apps ALM Pipeline with GitLab, Google Cloud Build and PAC CLI: Streamlining Solution Lifecycle Automation

A unique combination to achieve deployment automation of Power Platform Solutions

Hi Folks,

This post is about ALM in Power Platform integrating with a different ecosystem than usual, namely Google Cloud. Sounds interesting? This approach is mainly intended for folks using Google Cloud or GitLab as part of their implementation.

Integrating Google Cloud Build with Power Platform for ALM (Application Lifecycle Management) using GitLab is feasible and beneficial. This integration combines GitLab as a unified DevOps platform with Google Cloud Build for executing CI/CD pipelines, enabling automated build, test, export, and deployment of Power Platform solutions efficiently. This was the core idea for my session on Friday 28 November, at New Zealand Business Applications Summit 2025.

Detailed Steps for this implementation

Create an access token in GitLab for API Access and Read Access

Click on Add new token; at a minimum, select the scopes below when working with CI/CD using GitLab.

Create a host connection for the repository in GitLab

Specify the personal access token created in the previous step

Link your repository

The host connections created in the previous step will be shown under the Connections drop-down.

Create Trigger in Google Cloud Build

Click on Create trigger above, provide a name, and select the nearest region.

Event:

For now, I am choosing Manual invocation for illustration

Specify the repository in GitLab where your YAML resides.

You can optionally specify substitution variables, which are simply parameters you can pass to your pipeline from the Google Cloud Build configuration.

You can optionally require an approval and choose the service account tagged to your Google account in the drop-down.

Click on Save.

Next proceed to GitLab YAML

You can find the full code below

steps:
  - id: "export_managed"
    name: "mcr.microsoft.com/dotnet/sdk:9.0"
    entrypoint: "bash"
    args:
      - "-c"
      - |
        echo "=== 🏁 Starting Export Process ==="

        # ✅ Define solution name from substitution variable
        SOLUTION_NAME="${_SOLUTION_NAME}"

        # ✅ Install PAC CLI
        mkdir -p "${_HOME}/.dotnet/tools"
        dotnet tool install --global Microsoft.PowerApps.CLI.Tool --version 1.48.2 || true
        # Add dotnet global tools dir to the shell PATH for this step/session (preserve existing PATH)
        export PATH="$_PATH:${_HOME}/.dotnet/tools"

        echo "=== 🔐 Authenticating to Power Platform Environment ==="
        pac auth create --name "manual" --url "https://ecellorsdev.crm8.dynamics.com" --tenant "XXXXX-XXXX-XXXXX-XXXXXX-XXXXX" --applicationId "XXXXXXXXXXXXXX" --clientSecret "XXXXXXXXXXXXXXXX"
        pac auth list

        echo "=== 📦 Exporting Solution: ${_SOLUTION_NAME} ==="
        pac solution export \
          --name "${_SOLUTION_NAME}" \
          --path "/tmp/${_SOLUTION_NAME}.zip" \
          --managed true \
          --environment "${_SOURCE_ENV_URL}"
        echo "=== ✅ Solution exported to /tmp/${_SOLUTION_NAME}.zip ==="

        echo "=== 🔐 Authenticating to Target Environment ==="
        pac auth create \
          --name "target" \
          --url "https://org94bd5a39.crm.dynamics.com" \
          --tenant "XXXXXXXXXXXXXXXXXXXXXXXX" \
          --applicationId "XXXX-XXXXX-XXXXX-XXXXXX" \
          --clientSecret "xxxxxxxxxxxxxxxxxxxx"

        echo "=== 📥 Importing Solution to Target Environment ==="
        pac solution import \
          --path "/tmp/${_SOLUTION_NAME}.zip" \
          --environment "${_TARGET_ENV_URL}" \
          --activate-plugins \
          --publish-changes

        echo "=== 🎉 Solution imported successfully! ==="

options:
  logging: CLOUD_LOGGING_ONLY

substitutions:
  _SOLUTION_NAME: "PluginsForALM_GitLab"
  _SOURCE_ENV_URL: "https://org.crm.dynamics.com"
  _TARGET_ENV_URL: "https://org.crm.dynamics.com"
  _TENANT_ID: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  _CLIENT_ID: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  _CLIENT_SECRET: "xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  _SOLUTIONS_DIR: "/workspace/Plugins/08112025"

Solution from Source Environment

Now let's run the trigger, which will export the solution from the source environment and import it into the target environment. There are manual triggers, automatic triggers that fire whenever there is a commit to the repo in GitLab, and so on; pick whatever suits your needs best.

Solution imported to the target environment using Google Cloud Build

The table below illustrates key differences between Google Cloud Build and Azure DevOps.

| Aspect | Google Cloud Build | Azure DevOps Build Pipelines |
| --- | --- | --- |
| Pricing Model | Pay-as-you-go with per-second billing | Per-minute billing with tiered pricing |
| Cost Optimization | Sustained use discounts, preemptible VMs | Reserved capacity and enterprise agreements |
| Build Environment | Serverless, container-native, managed by Google Cloud | Requires self-hosted or paid hosted agents |
| Free Tier | Available with build minutes and credits | Available but more limited |
| Operational Overhead | Low, no need to manage build agents | Higher, managing agents or paying for hosted agents |
| Ideal For | Variable, short, or containerized workloads | Large Microsoft-centric organizations |
| Integration Cost Impact | Tightly integrated with Google Cloud serverless infrastructure | Integrated with Microsoft ecosystem but may incur additional licensing costs |

Conclusion:

PAC CLI is a powerful command-line tool that automates authentication, environment, and solution management within Power Platform ALM, enabling consistent and repeatable deployment workflows. It integrates smoothly with DevOps tools like GitLab and Google Cloud Build, helping teams scale ALM practices efficiently while maintaining control and visibility over Power Platform environments. Just note, my intention was to showcase the power of PAC CLI with the wider ecosystem, not only with Microsoft.

Cheers,

PMDY

Triggers not available in Custom Connectors – Quick Review

Power Platform folks rarely build new custom connectors in a project; most work with existing ones. It is often observed that triggers are missing from a custom connector; below are the things you can review if that happens.

1. Wrong Portal

If you’re building the connector in Power Apps, you won’t see trigger options. ✅ Fix: Use the Power Automate portal to define and test triggers. Only Power Automate supports trigger definitions for custom connectors.

2. Trigger Not Properly Defined

If your OpenAPI (Swagger) definition doesn’t include a valid x-ms-trigger, the trigger won’t appear.

Fix:

  • Make sure your OpenAPI includes a webhook or polling trigger.
  • Example (JSON): "x-ms-trigger": { "type": "Webhook", "workflow": true }

3. Connector Not Refreshed

Sometimes, even after updating the connector, the UI doesn’t refresh.

Fix:

  • Delete and re-add the connector in your flow.
  • Or create a new connection in Power Automate to force a refresh.

4. Licensing or Environment Issues

If you’re in a restricted environment or missing permissions, triggers might not be available.

Fix:

  • Check if your environment allows custom connectors with triggers.
  • Ensure your user role has permission to create and use custom connectors.

5. Incorrect Host/Path in Swagger

If the host or path fields in your Swagger are misconfigured, the connector might fail silently.

Fix:

  • Ensure the host and path are correctly defined.
  • Avoid using just / as a path — use something like /trigger/start instead.

6. Incorrect Environment

Make sure you are in the right Power Platform environment; when juggling things around, we sometimes mistakenly use connectors from the wrong environment. Do take note.

Finally you will be able to see Triggers while creating custom connectors…

Hope reviewing these will help…

Cheers,

PMDY

The refresh token has expired due to inactivity when connecting to Power Pages using Power Apps CLI – Quick Fix

Hi Folks,

This post is about a quick fix for an error occurred with Power Apps CLI.

I was trying to connect to my organization using CLI and that’s when I encountered this error.

Prerequisites:

Power Apps CLI, Visual Studio Code

After installing the prerequisites, I tried to connect to the Power Pages site in my organization from the VS Code terminal using the command below.

pac paportal list

That's when I encountered the error below.

That's when I understood that it was failing due to inactivity…

Your Power Platform CLI connection is failing due to an expired refresh token and an ExternalTokenManagement Authentication configuration issue. Here’s how you can resolve it:

Fix:

Reauthenticate with Dataverse

pac auth clear
pac auth create --url https://orgXXX.crm8.dynamics.com --username admin@Ecellors.onmicrosoft.com --password [your password]

Creating a new authentication profile resolves this issue…

    Now try to run the above command.

    This should prompt a new login window to authenticate your request; provide the details and you should be able to log in.

    Hope this helps..

    Cheers,

    PMDY

    Deploy dependent assemblies easily using PAC CLI

    Hi Folks,

    This is another post related to Plugins in Dynamics 365 CE.

    In medium to large-scale implementations, there is hardly a single Power Platform project that doesn't require merging external assemblies.

    We relied on ILMerge to merge those assemblies into a single DLL. We used to search for ILMerge packages on NuGet and install them for use.

    The plugin assemblies are then signed for several reasons, primarily related to security, assembly integrity, and versioning in the sandbox worker process.

    But neither of the above is needed any longer, thanks to the dependent assembly feature. With a few simple steps, you can build the plugin. Interesting, isn't it? Read on…

    Pre requisites:

    • Download Visual Studio 2022 Community Edition here
    • Download VS Code from here
    • Download Plugin registration tool from here
    • Download PAC CLI from here
    • Download and install NuGet Package Explorer from this link, then open NuGet Package Explorer

    Avoid Direct Plugin Project Creation in Visual Studio

    • Never create a plugin project directly from Visual Studio or any other IDE hereafter.
    Use the Microsoft Power Apps CLI instead
    • Always use the Power Apps CLI, as it is easy and only requires a single command to create the entire plugin project scaffolding.
    • This ensures a standardized and reliable development environment.
    • It automatically creates a NuGet package file that helps avoid the 'Could not load assemblies or its dependencies' error.

    Ok, let’s begin.

    Once you have downloaded all the prerequisites mentioned, make sure you have installed them on your local machine. Most are straightforward to download; for NuGet Package Explorer, you need to search the Microsoft Store to install it.

    1. Create a local folder for the Plugins

    Navigate to that folder from VS Code

    Now open terminal, run the pac command as below

    Execute the following command to create plugin project 

    • Browse to the directory where you want to create the plugin project
    • Execute the command "pac plugin init" on CMD to create the plugin project

    A plugin project will be created at your desired location as follows

    Plugin project in local folder will be created as below

    That’s it, you can close the VS Code for now.

    Click on the CS Proj file and open it in Visual Studio

    By default, 2 files are automatically created when you create a plugin project as shown above.

    Now we will install Bouncy Castle, which is an external library: right-click on the plugin solution –> Manage NuGet Packages

    I have added Bouncy Castle NuGet Package to my plugin project for Encryption and Decryption. You can have your own required NuGet Package as per your need.

    Build your project

    After a successful build, you will get the output result as follows

    Browse the directory of your project

    Open the file Plugin_Project.1.0.0.nupkg in NuGet Package Explorer by double-clicking it

    Now you can see that this NuGet package file contains the information for the Bouncy Castle package that we want to include in our plugin project package, as follows. In your case, include whichever NuGet package you need.

    Now open up plugin registration tool

    Click to create new connection

    Provide login details and login

    Click to Register New Package

    Browse to the directory where your nuget package file was created automatically when you build the project and import this file 

    Select the Common Data Service Default Solution and import it

    Click on view and Display by package

    Now your Plugin Project is successfully registered with all dependent assemblies and ready to use.

    While this post gives you a structure for building a plugin assembly, you can add the business logic as per your needs.

    Conclusion:

    In conclusion, navigating the intricacies of Microsoft Dynamics 365 CRM plugins demands a nuanced approach, especially when dealing with NuGet Packages and dependent assemblies. This article has delved into the critical process of resolving the persistent ‘Could not load assemblies or its dependencies‘ issue, offering a comprehensive, step-by-step demonstration.

    By following the recommended best practices, such as avoiding direct plugin project creation in Visual Studio and harnessing the power of Microsoft PowerApps CLI, developers can establish a standardized and reliable development environment. The CLI’s automatic creation of a NuGet Package file not only streamlines the process but also reduces the errors.

    To further facilitate your journey, prerequisites such as downloading and installing essential tools like the Plugin Registration tool, Microsoft PowerApps CLI, and NuGet Package Explorer are highlighted. The guide emphasizes the significance of these tools in ensuring a smooth plugin development experience.

    By adopting these practices and incorporating the suggested steps into your workflow, you not only troubleshoot existing issues but also fortify your understanding of the entire process. Take charge of your Dynamics 365 CRM plugin development, elevate your skills, and sidestep common pitfalls by mastering the art of handling NuGet Packages and dependencies seamlessly.

    References:

    Build and package plug-in code

    Cheers,

    PMDY 

    Microsoft Power Platform Center of Excellence (CoE) Starter Kit – Core Components – Setup wizard – Learn COE #02

    Hi Folks,

    This post is a continuation of my previous post on the CoE Starter Kit. If you have just landed on this page, I would suggest going here and checking out my introductory blog post on the CoE Starter Kit.

    Important:

    Do test out each and every component before rolling out to production; keep in mind that there are many flows which can trigger emails to users, which may annoy them.

    You need to install the components from the CoE Starter Kit extracted folder in a dedicated environment, preferably a Sandbox environment (not the Default environment), so that you can test them before moving changes to Production; make sure Dataverse is installed in the environment. First let's install the solutions, and later we can proceed to customize them.

    Install the CenterofExcellenceCoreComponents managed solution from your extracted folder. The exact version may differ over time; at the time of installing this, the version was CenterofExcellenceCoreComponents_4.24_managed.

    Then proceed to click on Import, as we will configure these environment variables later when required. It takes a couple of seconds to process; it then asks you to set the connections I talked about in the previous post. Just create a new connection if one is not available and click Next. Make sure you have green check marks for each connection, and you are good to click Next.

    Then you will be presented with the screen to input environment variables, as below; we will configure them later, so for now just proceed by clicking the Import button.

    The import process may take a while, around 15 minutes; once imported, you should see a notification message on your screen something like the one below.

    Step 1:

    You will have a bunch of Apps, Flows installed in your environment. Configure the COE Settings by opening the Centre of Excellence setup and upgrade wizard from the installed Center of Excellence – Core Components managed solution.

    It should look something like below when opened. You will be presented with some prerequisites

    Proceed with this step-by-step configuration; you don't need to change any of the settings, just proceed by clicking Next.

    Step 2: In this step, you can configure different communication groups to coordinate by creating different personas

    You can click on Configure group, choose the group from the drop down and enter the details and click create a group.

    Provide a group name and an email address without the domain in the next steps and proceed to create a group; these are actually Microsoft 365 groups.

    Once you have setup, it should show..

    This step is optional, but for efficient tracking and maximum benefit from the CoE, it is recommended to set it up.

    Step 3: The tenant Id gets populated automatically. Make sure to select No here instead of Yes if you are using a Sandbox or Production environment, configure your admin email, and click Next.

    Step 4: Configure the inventory data source.

    Tip: If you are not able to see the entire content on the page, you can minimize Copilot and press F11 so that the entire text on the page is visible.

    This is required for the Power Platform admin connectors to crawl your tenant data and store it in Dataverse tables, similar to how search engines crawl the entire internet to serve search results. Data export is still in preview, so we proceed with using cloud flows.

    Click Next.

    Step 5:

    This step is Run the setup flows; click Refresh to start the process. In the background, all the necessary admin flows will be running. Refresh again after 15 minutes to see that all three admin flows are running and collecting your tenant data as below, then click Next.

    Step 6:

    In the next step, make sure you set all the inventory flows to On.

    By the way inventory flows are a set of flows that are repeatedly gathering a lot of information about your Power Platform tenant. This includes all Canvas Apps, Model Driven Apps, Power Pages, Cloud Flows, Desktop Flows, Power Virtual Agent Bots, Connectors, Solutions and even more.

    To enable them, open the COE Admin Command Center App from Center of Excellence – Core Components Solution. Make sure you turn on all the flows available.

    After turning on all the flows, come back and check the Center of Excellence setup wizard; you should see a message like the one below saying all flows have been turned on.

    Configure data flows is optional; as we haven't configured it earlier, this step is skipped.

    Step 7: In the next step, all the apps that come with the Power Platform CoE Kit should be shared with the different personas based on your actual requirements.

    Step 8:

    This part of the wizard currently consists of a collection of links to resources, helping to configure and use the Power BI Dashboards included in the CoE.

    Finish

    Once you click Done, you will be presented with more features to setup.

    These setups have a similar structure but vary a bit based on the feature architecture.

    Now that we have got started with the Starter Kit and set up its Core Components, which are the important ones, you can keep customizing further. In future posts, we will see how to set up the Center of Excellence – Governance Components and the Center of Excellence – Innovation Backlog. These components are required to finally set up the Power BI dashboard and use it effectively to plan your strategy.

    Everyone who's ever installed or updated the CoE knows how time-consuming it can be: not just the setup procedure, but also the learning process, the evaluation and finally the configuration and adoption of new features. It's definitely challenging to keep up with all this, especially since new features are delivered almost every month. My attempt here is to keep it concise while still helping you understand the process.

    The setup wizard is a clear and handy resource for getting an overview of the CoE architecture and a great starting point for finding documentation. It simplifies administration, operations, maintenance and maybe even customization.

    If you face issues using the COE Starter Kit, you can always report them at https://aka.ms/coe-starter-kit-issues

    Hope this helps someone setting up the CoE Starter Kit. If you have any feedback or questions, do let me know in the comments.

    Cheers,

    PMDY