Dataverse – Git Integration – Preview – Quick Review

Hi Folks,

This post is about Dataverse and Git integration, which is one of the most sought-after features in today’s automation era. Since this is a preview feature, you need to create a new environment with Early Access enabled to test it, or you can use an existing US Preview environment.

Every MDA (Model-Driven Application) and its components can be safely moved across environments using Solutions with the help of Azure DevOps Pipelines. However, when it came to integrating Power Platform Solutions with Azure DevOps, we had to manually export the solution and download it every single time we wanted to commit the Solution artifacts to an Azure DevOps repo.

With this new preview feature, we can integrate Power Platform Solutions directly with Azure DevOps.

Let’s see this in action…wait a moment, there are some prerequisites to consider…

  1. The environment should be a Managed Environment, and you need to be an admin for the environment
  2. An Azure DevOps subscription and license should be available to set this up, along with permission to read source files and commits from a repo (you should be a member of the Contributors group in Azure DevOps)
  3. The email address you use for Azure DevOps and for Power Platform should be the same

Setup:

Connecting Dataverse with Azure DevOps is easy but requires a bit of understanding of the Binding options available.

Well, there are two types of binding options:

  1. Environment binding – a single root folder binds to all the unmanaged solutions in the environment
  2. Solution binding – each solution uses a different root folder in Azure DevOps for binding

Note: Once the binding is set up, there isn’t a way to change it, so set it up carefully; otherwise you may need to delete the folder and create a new one in Azure DevOps.

Let’s see them one by one…for demo purposes, I have created two projects in my Azure DevOps instance:

  1. Solution binding: when we use this, each solution’s components will be available as pending changes under its own root folder
  2. Environment binding: when we use this, all the unmanaged solution components will be mapped to one Azure DevOps root folder. Let’s set this up.

We were initially only able to use Solution binding, as Environment binding didn’t show any changes to be committed, but there is a catch here.

We can set up Environment binding and verify whether the solution components get marked as pending changes. Do note that setting up the binding is a one-time activity for an environment; once set up, it can’t be changed from one type to another.

Open https://make.powerapps.com, navigate to Solutions, and click on the ellipsis as below.

Once you have clicked on Connect to Git…

Since we are testing Environment binding, let’s select the Connection Type as Environment.

Then click on Connect. Once connected, you should see an alert message at the top of the Power Apps maker portal.

Now create a new solution named ecellors Solution, as below.

Verify the integration by clicking on Git Integration as below

It should show as below

Now let’s add a few components to the solution we created.

Once added, let’s publish the unmanaged solution and verify it…

Do look closely: you should see a Source Control icon (highlighted in yellow for illustration).

Also, you should see a commit option available at the top

You should now be able to commit the solution components just as if you were committing code changes.

It also shows the branch to which we are committing…

Unlike a normal code push to Azure DevOps, pushing these changes takes a few minutes, depending on the number of solution components you are pushing…once it is done, it will show a commit message like the one below…

Now let’s verify our Azure DevOps repo…to do this, go back to the main Solutions page and click on Git Connection at the top…

After clicking on Git Connection, click on the link to Microsoft Azure DevOps as below

Then you should be navigated to the Azure DevOps folder as below, where all the solution files are tracked component-wise.

Now let’s move back to the Power Apps maker portal and make some changes to one of the components inside the solution…

Let’s say I edited the flow name, created a new connection reference, and then saved and published the customizations.

If you made changes at the Azure DevOps repo level, you can come back and click on Check for updates; if there are any conflicts between the changes made in Azure DevOps and the components in the solution, they will be shown as conflicts.

We now have 3 component changes, all listed here…you can click on Commit.

As soon as the changes are committed, you should see a message saying Commit Successful and 0 Changes, 0 Updates, 0 Conflicts.

You have now successfully integrated Dataverse solution components with Azure DevOps, with no manual intervention required when deploying solutions using Azure DevOps Pipelines.

The feature is still in preview and only available for early release, and a couple of issues still need to be fixed by Microsoft.

I tested this feature by creating an environment in the US Preview region. It will be of good value to projects using automation, and the solution repository can be further deployed to other environments using Azure DevOps Pipelines.

This will be rolled out next year. Hope you learned something new today…

Cheers,

PMDY

Microsoft Power Platform Center of Excellence (CoE) Starter Kit – Basics – Learn COE #01

Hi Folks,

This is an introductory post, but it’s worth going through, as I will be sharing the basics of using a Center of Excellence (CoE) in Power Platform. Let’s get started.

So, what’s a Center of Excellence? A CoE plays a key role in driving strategy and keeping up with innovation in this fast-paced world. Firstly, we may need to ask ourselves a few questions…Does your organization have a lot of flows, apps, and copilots (aka Power Virtual Agents)? Do you want to manage them effectively? Then the CoE Starter Kit is a great choice for moving forward. It is absolutely free to download, and it is a collection of components and tools that help you oversee and adopt Power Platform solutions. The assets in the CoE Starter Kit should be seen as a template from which you derive your individual solution, or as inspiration for implementing your own apps and flows.

There are some prerequisites before you can install the CoE Starter Kit; most medium to large scale enterprise Power Platform implementations should already have these in their tenant.

  1. Microsoft Power Platform service admin, global tenant admin, or Dynamics 365 service admin role.
  2. Dataverse, which is the foundation for the kit.
  3. Power Apps Per User license (non-trial) and Microsoft 365 license.
  4. Power Automate Per User or Per Flow license (non-trial).
  5. The identity must have access to an Office 365 mailbox that has the REST API enabled, meeting the requirements of the Outlook connector.
  6. Make sure you enable Power Apps code components in the Power Platform admin center.
  7. If you want to track unique users and app launches, you need an Azure app registration with access to the Microsoft 365 audit log.
  8. If you would like to share the reports in Power BI, you minimally require a Power BI Pro license.
  9. Communication groups set up so that Admins, Makers, and Users can talk to each other.
  10. Create 2 environments: 1 for test and 1 for production use of the Starter Kit.
  11. Install the Creator Kit in your environment by downloading the components from here.

Also, the connectors used by the kit should be allowed in your data loss prevention (DLP) policies.

Once you are done checking the requirements, you can download the starter kit from here.

You can optionally install it from AppSource here or by using the Power Platform CLI here.

The kit provides automation and tooling to help teams build the monitoring and automation necessary to support a CoE.

So far we have seen the advantages of having a CoE in your organization, along with the prerequisites. In the upcoming blog post, we will see how you can install the CoE Starter Kit in your Power Platform tenant and set it up to plan your organization’s resources to best advantage.

Cheers,

PMDY

Using Preferred Solution in Power Apps saves you time…Quick Review

Hi Folks,

Today, I will be pointing out the advantages of using a Preferred Solution and the consequences of using or removing it…while the feature has been out there for quite a few months, many Power Platform projects are still not utilizing it. It can reduce your hassles when many people are working together in a team, as you can make sure everyone’s changes go to this solution.

First, let’s understand what a Preferred Solution means to makers. In order to use this effectively, turn on the preview feature to create canvas apps and cloud flows in solutions, as suggested below, from https://admin.powerplatform.com, and click Save. This is not a mandatory step, but it is better to enable it, as you can then add Power Automate flows and canvas apps to the solution.

Next navigate to https://make.powerapps.com –> Solutions –> Set preferred solution

If no preferred solution is set, by default it will show the Common Data Service Default Solution as the one to set; if you wish to set another solution, you can select the respective solution from the drop-down.

Enable or disable the toggle to show the Preferred Solution option on the Solutions page.

Just click on Apply.

Advantages:

  1. Once a preferred solution is set, any components added by makers go to the preferred solution by default, so makers need not worry about choosing the right solution while creating Power Platform components.
  2. There is no need to worry about solution components landing in the default solution, as new components will be added to the preferred solution automatically.

Limitations:

  1. A preferred solution can only be set in the modern designer
  2. Components created in the classic designer won’t go to the preferred solution
  3. Some component types are not covered: custom connectors, connections, dataflows, canvas apps created from an image or a Figma design, copilots/agents, and gateways

You can always remove your preferred solution so that other makers can set their own, but do this with caution so that neither your team members’ work nor your own gets impacted.

Hope this saves few seconds of your valuable time…

Cheers,

PMDY

When to use NOLOCK in SQL – Quick Review

Hi Folks,

Well, this post is not directly related to Power Platform, but I want to bring it up here to highlight the significance of using NOLOCK in Power Platform implementations that use SQL Server.

Recently, during a deployment activity, we had an SSIS job writing a lot of data into SQL Server while, at the same time, we were trying to read data from the same table. I received a never-ending “Executing query…” message. That is when arguments arose about this, hence I would like to share the significance of NOLOCK.

The default behaviour in SQL Server is for every query to acquire its own shared lock prior to reading data from a given table. This behaviour ensures that you are only reading committed data. However, the NOLOCK table hint allows you to instruct the query optimizer to read a given table without obtaining an exclusive or shared lock. The benefits of querying data using the NOLOCK table hint are that it requires less memory and prevents deadlocks from occurring with other queries that may be reading similar data.

In SQL Server, the NOLOCK hint, equivalent to the READUNCOMMITTED hint and the READ UNCOMMITTED isolation level, allows a SELECT statement to read data from a table without acquiring shared locks on the data. This means it can potentially read uncommitted changes made by other transactions, which can lead to what’s called dirty reads.
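Since the hint and the isolation level are two routes to the same behaviour, here is a minimal sketch of both; the table and column names are placeholders:

-- Per-table: only this reference to dbo.SomeTable skips shared locks.
SELECT Col1
FROM dbo.SomeTable WITH (NOLOCK);

-- Per-session: every subsequent SELECT behaves as if it had NOLOCK.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT Col1
FROM dbo.SomeTable;

-- Restore the default isolation level afterwards.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;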

Here’s an example:

Let’s say you have a table named Employee with columns EmployeeID and EmployeeName.

-- Sample table and seed data for the demo
CREATE TABLE Employee (
    EmployeeID INT,
    EmployeeName VARCHAR(100)
);

INSERT INTO Employee (EmployeeID, EmployeeName)
VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Charlie');

Now, if two transactions are happening concurrently:

Transaction 1:

BEGIN TRANSACTION
-- The update runs, but the transaction is left open,
-- so the change is still uncommitted.
UPDATE Employee
SET EmployeeName = 'David'
WHERE EmployeeID = 1;

Transaction 2:

-- Reads without acquiring shared locks, so it is not blocked by
-- Transaction 1 and may see the uncommitted value.
SELECT EmployeeName
FROM Employee WITH (NOLOCK)
WHERE EmployeeID = 1;

If Transaction 2 uses WITH (NOLOCK) when reading the Employee table, it might read the uncommitted change made by Transaction 1 and retrieve 'David' as the EmployeeName for EmployeeID 1. However, if Transaction 1 rolled back the update, Transaction 2 would have obtained inaccurate or non-existent data, resulting in a dirty read.
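To make the dirty read concrete, here is how the example plays out if Transaction 1 abandons its change:

Transaction 1 (continued):

-- Abandon the update; 'David' never becomes committed data.
ROLLBACK TRANSACTION;

Transaction 2 (run again):

SELECT EmployeeName
FROM Employee WITH (NOLOCK)
WHERE EmployeeID = 1;

-- Returns 'Alice' this time. The 'David' value read earlier was
-- never committed, so the first read was a dirty read.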

Key takeaways about NOLOCK:

  • Pros: Reduces memory use, avoids blocking, speeds up reads.
  • Cons: May read uncommitted or inconsistent data.

Using NOLOCK can be helpful in scenarios where you prioritize read speed over strict consistency. So, in my case, as I just wanted to view the data, using NOLOCK was a good fit, since it avoided blocking the query. However, it’s essential to be cautious since it can lead to inconsistent or inaccurate results, especially in critical transactional systems.

Other considerations like potential data inconsistencies, increased chance of reading uncommitted data, and potential performance implications should be weighed before using NOLOCK.

Conclusion:

There are benefits and drawbacks to specifying the NOLOCK table hint; as a result, it should not just be included in every T-SQL script without a clear understanding of what it does. Nevertheless, should a decision be made to use the NOLOCK table hint, it is recommended that you include the WITH keyword, since using NOLOCK without WITH is deprecated. Also, always end your transactions with a COMMIT (or ROLLBACK) so that locks are released.
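For reference, here is the deprecated syntax next to the recommended one:

-- Deprecated: a table hint without the WITH keyword still parses
-- today, but this form is on SQL Server's deprecation list.
SELECT EmployeeName FROM Employee (NOLOCK);

-- Recommended: always wrap table hints in WITH (...).
SELECT EmployeeName FROM Employee WITH (NOLOCK);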

Hope this helps…

Cheers,

PMDY