Dynamics 365 Integration General Guidance | October 2, 2023 - TechTalk

Hello everyone and welcome to today's Dynamics 365 TechTalk. The topic for today is integration strategy for Dynamics 365. My name is Akshat Singh and I will be your moderator today, along with my colleague Michele. We are broadcasting this session through a Teams live event.

Presenting for us today from Microsoft, we have Ali, Corina and Amira from the Dynamics FastTrack team. Ali, over to you.

Thank you, Akshat. So welcome everyone to the first TechTalk in a series of four sessions in which we will dive into the details of the integration strategy, patterns, tools and concepts for Dataverse and Dynamics 365 FNO applications. In this TechTalk we'll focus on the general concepts, describing the main available integration options for Dataverse and FNO applications. We'll comment on the common capabilities and provide high level guidance to effectively approach the integration strategy. We will dedicate the following two appointments in the series to deep dives into the application specific options.

One appointment will focus on Dataverse and one will focus on FNO applications. The last session will be dedicated to providing real life examples of how to approach complex scenarios, including cross application integrations and hybrid on premises to cloud integration. As said, all the recordings and the presentations will be available after the TechTalk, and we also include the examples we provide during those sessions.

Looking at today's agenda, we'll start with a quick introduction, which will include an eagle eye view of the main components of the integration landscape. We'll then move on to explain the fundamental principles that should guide integration design. After that, we'll describe the main integration patterns and the factors that should guide pattern choice. Next, we are going to focus on the integration capabilities that are common between Dataverse and FNO applications. Moving further, we are going to describe some of the external tools used for third party integration, and we'll briefly touch on security for integration.

Finally, we'll share some fresh information on our roadmap. We'll use the remaining time for some live Q&A. Please also be sure to leverage the Q&A pane for questions and the chat during the journey, so there's a lot to unpack.

Let's get started. As you know, there are many reasons why you want to integrate systems in your solution. Examples are multi phased implementations, legacy systems remaining live, financial consolidation and multi system architectures. Dynamics 365, Dataverse, Power Platform and Azure all provide a vast array of tools for integrating across processes and systems, which often makes scalable solutions readily available to you. Today, we want to focus on the main set of integration options for Dataverse and D365, and these applications provide them to address different business and technical scenarios. At a very high level, the integration options can be grouped based on the application they can be used for: D365 finance and operations applications, for example, have dedicated options like the DMF, the Data Management Framework; Customer Engagement and Dataverse have dedicated ones like plugins; and some, like the eventing framework, apply to both.

Another popular categorization is based on the flow direction, inbound versus outbound, or on the decoupling pattern, so whether it is a synchronous or asynchronous call. We'll explore the integration options in more detail in the following slides. So in this slide we want to show you all the main options that we have to integrate with Dataverse and D365, using the previous categorization.

So here we group the fundamental endpoints by the platform they apply to, Dataverse, FNO or both, and the color schema indicates whether the endpoint is meant to be used in synchronous, asynchronous or near real time mode, while the arrows indicate whether the endpoint is used for inbound or outbound communication. To be clear, in this context outbound communication means that Dataverse or FNO is initiating the communication, while inbound means that the external system is contacting the applications. This is true regardless of the direction of the data transfer. So starting from the left we have the finance and operations dedicated endpoints. The main protagonists are the entity based options, which include the Data Management Framework, which is asynchronous communication, and the OData synchronous endpoints.

We also have the ERP's traditional custom services and the ability to use custom classes and communicate with XML endpoints. Finally, we have specialized endpoints used for dedicated scenarios, such as Electronic Reporting, often used to fulfill regulatory requirements or EDI communication needs, the punch out feature to utilize worldwide standard integrations for vendor collaboration, or the Invoice Capture solution designed to automate the import and processing of vendor invoices. On the right we have the Dataverse dedicated endpoints, starting with OData and the Web API, which can be considered both synchronous and asynchronous depending on the way they are called. Please notice that we have maintained a clear distinction between OData for Dataverse and OData for the FNO platform due to the current implementation. This might change in the future as the two platforms are coming closer and closer.

On the same side, we have Dataverse plugins, both synchronous and asynchronous, then webhooks and the recently added TDS endpoint, which is essentially a SQL endpoint. In the middle we put the endpoints and technologies that are common between FNO and Dataverse. The main one shown is the dual write feature, which connects the two worlds and allows for a seamless integration. This is not a third party integration, but it allows for complex integration scenarios where both sides are involved in the same common ground.

We also put there the near real time technology of the eventing framework, again available on both sides, the virtual tables and virtual entities used for both cross application data visibility and third party integration, and finally the Synapse Link feature to interact with the world of data lakes, Synapse and Microsoft Fabric. Later in the presentation we'll discuss these common capabilities more in depth. Microsoft Office is tightly connected with both Dataverse and finance and operations, allowing for many forms of integration, while Teams is strongly integrated with Dataverse only, for now. Everything is powered by a strong set of underlying technologies and services.

At the bottom you can see Azure Active Directory as the main and often only way for authentication, the preferred standard for implementing the endpoints, which is REST APIs, and the preferred format for messaging, which is JSON. Of course, the last two are much less strict than the authentication, as many variations are available.
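To make this concrete, here is a minimal sketch of acquiring an Azure Active Directory token with the client credentials flow and calling a Dataverse REST endpoint with a JSON response. The tenant, app registration and environment URL are placeholders, and the MSAL library is only one of several ways to do this.

```python
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder tenant
CLIENT_ID = "11111111-1111-1111-1111-111111111111"   # placeholder app registration
CLIENT_SECRET = "<client-secret>"                    # in practice, keep this in Azure Key Vault
ENVIRONMENT = "https://yourorg.crm.dynamics.com"     # placeholder Dataverse environment URL

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# ".default" requests the application permissions already granted to the app registration.
token = app.acquire_token_for_client(scopes=[f"{ENVIRONMENT}/.default"])

# The bearer token is attached to every REST call; the payload format is JSON.
headers = {
    "Authorization": f"Bearer {token['access_token']}",
    "Accept": "application/json",
}
print(requests.get(f"{ENVIRONMENT}/api/data/v9.2/WhoAmI", headers=headers).json())
```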

Moving a layer up, we can see a set of services and solutions that sit on top of the main applications. These are specialized services that provide additional capabilities and dedicated endpoints to allow for easy consumption of data and seamless business process integration. We won't go too deep into these services as most of them have their own dedicated TechTalks, but we will include some of them in the next sessions as they can simplify the approach to specific scenarios.

Please be aware that some of the services, like the pricing service, are still in preview. On top of all of that, we can identify the main integration tools available for the different applications to interact with the underlying services and endpoints. We'll have a more in depth view of some of the most commonly used ones in the next slides, and we'll include more details in the upcoming sessions as well. It's important to note that these components are the main way to actually bridge the gap between Dataverse, D365 and third party applications.

They are also used for more general purposes, and they can be leveraged to design company wide integration strategies which are in line with the Dynamics and Dataverse integration capabilities. So this is the full picture of the main integration capabilities and tools available for Dataverse and Dynamics 365. Now we want to move to the fundamental integration principles and, with this full picture still in mind, quickly explore them, because they should guide any successful integration strategy.

Now, these principles can be synthesized in five points: requirements, existing strategy, monitoring, simplification and scale. Let's explore them one by one. So the first one is requirements, which mandates starting the integration strategy from the requirements collected.

Although it can appear trivial, many times integration strategies get designed before the full acquisition of the requirements or some of their key aspects. Often existing architectures or familiarity with some tools are the initial factor, and only after a while are some aspects of the requirements truly explored. This can lead to inconsistency and rework, ultimately delaying the project or forcing suboptimal solutions. When we look at the requirements, we want to focus first of all on business requirements, aka what the integration should achieve in the context of a business process. Again, this is much less trivial than it sounds. Then we have technical requirements, meaning those that come from technical limitations imposed by the third party system, or technical characteristics that are embedded in that system or any other component.

For example, a legacy system could be limited to XML format only or plain text only. Also in this context, service limits from the Dynamics and Dataverse perspective should be considered. Next, we have performance requirements. These are very important because they determine the volume and frequency of the integration flows, especially in the context of the business requirements.

Without this information, you cannot really effectively design your integration. Then security: some systems can or cannot adopt specific security protocols, or they need to be secured in a dedicated manner because of regulatory requirements or internal policies. For example, an on premises system could have the limitation of being relegated to internal network traffic only.

Or, for example, an external public endpoint could accept only certain types of protocols or certificates. So the next one is existing strategy, aka align your integration strategy for Dataverse and D365 with the broader company strategy if it already exists. This approach doesn't cancel the previous point, but in light of the requirements there are clear advantages in keeping in consideration what is already available and well known in the company, or the potential impact of the evolution the project is bringing. Again, we can split the recommendation into multiple points.

First of all, consider and align with the company integration platform, meaning the overall platform used, including services, software, physical devices and the existing strategy to use that platform. With that, consider existing patterns, tools and middleware. In the context of the company platform, special attention should be reserved for the patterns already adopted and well known, for example if the company strategy is based on message based integration, as well as for consolidated middleware tools that are already in place to collect and distribute information. Especially the middleware can be a powerful ally to partially reuse consolidated segments of existing flows. Then, stay aligned with modern cloud integration approaches. This completes the previous point, as the introduction of cloud services

such as Dataverse and Dynamics can have an impact in modernizing the current integration strategy and harmonizing the direction of the bigger picture toward a more modern integration approach for the entire organization. And lastly, consider the cost implications. This again seems trivial, but this point has major implications on the integration strategy and it's often overlooked, especially in the initial phase of the design. Integration involves many components and they all have costs in one form or another, not only licenses, which of course must be evaluated as well. Expensive choices may not be in line with the company's strategy or may need to be implemented with a wider approach to increase the return on investment.

For example, the adoption of a Synapse dedicated capacity could be expensive, but the return on investment could be increased by adopting the same technology for a variety of systems in the organization. So monitoring is another fundamental aspect of a successful integration strategy. All components should be covered by some form of monitoring and error handling strategy, while also being protected and resilient thanks to high availability and disaster recovery strategies.

In this context, the recommendation is that every critical component must have high availability to allow for the loss of nodes without disruption. Disaster recovery strategies should also be implemented for the same reason. Monitoring shouldn't come as an afterthought, but should be an integral part of the strategy from the beginning. Deciding the monitoring strategy could also influence some of the granular decisions during the design phase.

Error handling shouldn't be an afterthought either. Moreover, it should be well documented and tested as soon as possible. Unlike critical events, errors are expected in any integration context, and the handling should be as smooth and frictionless as possible. Bottom line: always, always consider the unhappy path.

And lastly, notifications should also be considered, especially for those components that are not directly in front of the users, right? Some components may have integrated retry mechanisms that would mask underlying communication problems, so it's always important to receive notifications to proactively mitigate risks before they become proper issues. Next up is simplification, or: please, please keep it simple.

As I often advise customers and partners, it's a known fact that integration design can become rather complicated and even messy at times. The idea of keeping the solution as simple as possible should always be one of the main factors in the decision making. Some of the key recommendations here are to leverage the low code no code capabilities that Power Platform and Microsoft Azure provide. This is one of the main advantages of using modern cloud tools to build your integration landscape. Azure and Power Platform provide some of the most powerful low code no code capabilities in the entire market. With them, you can build powerful and resilient integration components that don't require long and complex phases of development and coding.

They come with many out of the box connectors and capabilities that are fundamental for a successful implementation. Then define uniform integration patterns. Once you have considered all the previous points, try to avoid using all of the possible different patterns and tools.

If it is reasonable, try to converge to a subset of optimal patterns which can be used for all your flows. This will simplify the design and the build phase, as well as the maintenance. Consider using a middleware even if it's not already in the company's landscape. The introduction of a middleware tool or a middleware layer could further simplify the design of the flows and avoid some of what I call excessive spaghettification

that comes from the overuse of point to point connection strategies. Plus, centralized error handling and notifications: we mentioned that one of the greatest challenges, especially in complex scenarios, is to correctly implement the required error handling and notification systems. All of that can be complex to maintain and monitor, so a centralized approach could help simplify the task. One example is leveraging Application Insights, which is natively available for many, many different tools.
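As a small, hedged illustration of that centralized approach: assuming a custom Python integration component and the azure-monitor-opentelemetry package, a single configuration call can route its logs and exceptions to Application Insights (the connection string is a placeholder, and exact collection behavior depends on the package version and configuration).

```python
import logging
from azure.monitor.opentelemetry import configure_azure_monitor

# One call wires logs, traces and metrics from this component to Application Insights.
configure_azure_monitor(connection_string="InstrumentationKey=...;IngestionEndpoint=...")

logger = logging.getLogger("integration.orders")
logger.setLevel(logging.INFO)

def process_order_message(message: dict) -> None:
    try:
        # ... transform and forward the message to the target system ...
        logger.info("Processed order %s", message.get("orderId"))
    except Exception:
        # Centralized error handling: the failure shows up in Application Insights
        # instead of being swallowed inside the component.
        logger.exception("Failed to process order %s", message.get("orderId"))
        raise
```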

Lastly, you want to consider how the integration design will be able to scale, aka if and how you are building for growth. So first of all, consider the scalability of all critical components and how they react to sudden changes in the volumes and frequencies of the flows. You want to essentially secure some headroom in your design. Now, of course, additional requirements may come in the future.

We all hope they will come, as it means the organization is expanding and growing. The integration strategy should take into account the effort required to add new flows and new scenarios on top of the existing ones. For example, this is the typical scenario in which low code no code tends to perform better than hard coded dedicated solutions that can be changed only by well experienced developers. Then, of course, updates will come in the Dataverse and Dynamics platforms as well as in all the Azure tools or any other tool. Keep in mind the recurring necessity of reviewing the strategy to bring in new features and adjust for deprecated ones.

Of course, the more modern the approach you take, the less likely it is that you will soon need to change something that has been deprecated or is becoming old. The integration components should also be covered by application lifecycle management. We are aware that even between Azure and Power Platform there are significant differences on this topic.

But some sort of ALM is important to keep the changes organized, especially in the long term. Much too often we see implementation teams losing track of what has been done and why some time after the integration landscape has been implemented. Last, parallelism. Parallelism is a great way to allow for scalability.

Many tools allow for some form of multithreading or parallel execution, and that should be leveraged. This is true for internal endpoints, for example the Data Management Framework, and for external tools, for example Logic Apps has parallelism in its for each cycles. The correct design of these components could automatically resolve some of the main scalability issues that you may face in the future. With that, I'll give the stage to my colleague Corina for the next topic, which is going to be integration patterns. Corina, to you.

Thank you very much Ali, and hello everyone. Next we will walk through more key factors that you should consider when choosing patterns for your system integration needs. These are complementary to the concepts that were just shared by Ali. You should consider latency. This is a factor that directly impacts user experience, data consistency, system performance and the ability to meet business requirements and obligations. Synchronous integrations happen in real time, provide instant feedback and block the current user action until a response is received.

They have simple error handling but carry risks of performance issues and tight coupling. The alternative is of course asynchronous integration, which happens in the background, doesn't block the current user action and has a delayed response. Synchronous integration patterns and near real time patterns are often mischaracterized and mistakenly used interchangeably. Near real time

is an asynchronous integration, suitable as well for scenarios where data exchange can occur with some small delay and a real time response is not mandatory. Next, message routing, an important factor. Messages are the means by which applications stay in sync on events that are occurring across data points.

We have point to point, which was also referenced by Ali earlier, the simplest, most direct way of connecting two systems. It involves creating a dedicated link or interface between these systems. However, it has drawbacks such as scalability, complexity and coupling issues, and if you need to integrate more than two systems, you will have to create multiple point to point connections, which can quickly become unmanageable and hard to modify.

Another option for message routing is using a centralized hub for routing messages between different services. For example, you can use an enterprise service bus or an integration platform as a service as the hub for an integration solution. Frequency is another critical factor to consider. This is the rate at which data or events are generated and need to be integrated. High frequency integrations are more suitable for minimal latency patterns, while medium or low frequency integrations are more suitable for batch processing or reporting.

Then triggers: what action triggers sending the data from the source to the target? Is it on demand, user initiated, or event driven to keep systems in sync as events occur? Or is it time scheduled, for synchronizing data at regular intervals? Then, operations: what type of operation is in the scope of the integration? Create, update and delete are typically straightforward and can be accommodated by various integration patterns; for read or retrieve operations, consider the choice of integration pattern together with other key factors like data volume and latency. Next, batching: should each message or record be processed individually or in batch? Batching messages or data sets enables less frequent communication or chatter, but it also typically makes messages and payloads bigger. Consider if the requirements support a batched approach.

Or does your integration require individual records or rows? Then volumes: it's critical to have a clear picture of the transactional volumes of the data that will go through the interface, how that load is distributed over time and what the requirements are in the longer term. Also, you need to know that when you use Dataverse and Dynamics 365 endpoints, service protection limits are built in. For FNO, we have resource based service protection limits, and in Dataverse we have user based service protection limits.
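As a hedged sketch of what respecting those limits can look like from a client: when the platform throttles a call it returns HTTP 429 with a Retry-After header, so an integration component can back off and retry rather than fail the flow (the helper below is illustrative, not part of any SDK).

```python
import time
import requests

def get_with_retry(url: str, headers: dict, max_attempts: int = 5) -> requests.Response:
    """GET a Dataverse or FNO REST URL, backing off when throttled (HTTP 429)."""
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # The service tells the caller how long to wait before retrying.
        wait_seconds = int(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait_seconds)
    raise RuntimeError(f"Still throttled after {max_attempts} attempts: {url}")
```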

As for protocol, each integration pattern will be compatible with one or more transfer protocols, so it's important to identify these. We have web service protocols that are particularly fitting for integration patterns like point to point or enterprise service bus, and they excel in scenarios where structured real time data exchange is crucial, such as updating customer records or processing online transactions. File formats and data dictionaries: consider the nature of the data being exchanged. Is it structured or unstructured, and what are the operations available based on the message or the API schema? Data mapping and metadata definition: consider if you will use a canonical model as a common format intermediary, so that all of the data exchanged will be able to adhere to this format,

or if you will need to use an application specific format where each system understands its own data format. Point to point and ETL often involve application specific mapping, where data is extracted from a source system, transformed, and then loaded into a target system with a format specific to that application. Then, does the data exchange require transformation, calculations or remapping as part of the integration? Should the transformation happen within the middleware or the broker itself? Or does it need to happen at the source or target system? If it's the latter, then the choice of integration patterns will be influenced by the source's and the destination's capabilities. Finally, what kind of error handling and notifications are required? What will be the reconciliation process and the tools we have at our disposal in case of data discrepancies due to integration errors? We recommend you consider all of these factors combined, of course, and not in isolation.
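To illustrate the canonical model idea with a minimal, hedged sketch: a small mapping function can translate an application specific record into a common shape that every downstream system consumes. The field names below are illustrative, loosely based on a CustomersV3-style entity, not a prescribed schema.

```python
def to_canonical_customer(fno_customer: dict) -> dict:
    """Map an application specific (FNO-style) customer record to a canonical message."""
    return {
        "customerId": fno_customer["CustomerAccount"],      # illustrative source field
        "name": fno_customer["OrganizationName"],            # illustrative source field
        "currency": fno_customer.get("SalesCurrencyCode"),   # illustrative, may be absent
        "source": "FNO",                                     # provenance, useful for reconciliation
    }

# Example: a record read from the FNO OData endpoint becomes a canonical payload
# that the middleware can route to any subscriber without app specific knowledge.
canonical = to_canonical_customer(
    {"CustomerAccount": "US-001", "OrganizationName": "Contoso", "SalesCurrencyCode": "USD"}
)
```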

And now that we've covered the factors to consider and the key concepts when choosing integration patterns, let's bring it all together and review some of these factors for the endpoints exposed by Dataverse and the Dynamics 365 FNO apps. In this table we are listing the endpoints in the first column; in the second column, for which apps these endpoints are available, Dataverse or FNO; then the direction, inbound or outbound; the latency, synchronous or asynchronous; the operations, create, retrieve, update, delete, action; whether batching is available, so if the endpoint can run on multiple records; the volumes for which the endpoint is well suited; error handling, where all of these endpoints will return errors, but what we refer to as error handling is whether there is logging or embedded error capture that can be monitored; and finally, in the last column, the scenarios, at a very, very high level, for which that endpoint is best suited. We will review the common app features, such as OData, export to data lake, events and virtual entities, in the next chapter. For now, I will highlight some of the application specific endpoints, starting with the Package API and Recurring Integrations for FNO.

They both have asynchronous latency and are well suited for high volumes. You can decide between these two depending on your scheduling, transformation and protocol needs. For example, Recurring Integrations supports SOAP and REST, while the Package API supports only REST. Then the SQL endpoint, the TDS endpoint, provides read access to Dataverse, and it allows running Dataverse SQL queries, which are a subset of Transact-SQL, against its data while honoring the Dataverse security model. It is generally available to be used with Power BI with the Dataverse connector, and in preview with tools like SQL Server Management Studio. It has documented limits, currently 80 megabytes per query, and entitlement limits and service protection limits apply just as they would for a normal Web API call.

It is not an endpoint suitable for data integration or data extraction. Then, plugins. These are Dataverse custom classes compiled into an assembly that can be registered to be executed when certain events within the Dataverse event framework are triggered. Plugins can run on many messages, before or after the transaction occurs, in synchronous or asynchronous mode, and they essentially provide a pro code method to enrich or update the behavior of the platform, for example to perform validations or calculations and, very importantly for our TechTalk, to notify third party systems of events happening in Dataverse. They can run on events like create, update, delete, associate and much more, and even on messages created by custom actions.

It's important to mention here the Azure aware plugins that can post the current plugin execution context to Azure Service Bus queues or topics, and thus allow these events to be ultimately received by any listener application. Another method for publishing events from Microsoft Dataverse to an external service is to register webhooks. A webhook is an HTTP based mechanism for publishing events to any Web API based service of your choosing. For example, an excellent way to implement webhooks is with Azure Functions, but again, this is not a requirement. You can use the platform and programming language that makes sense for your use case, team skills and knowledge.
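As a hedged sketch of that option: an HTTP-triggered Azure Function (Python v2 programming model here, purely as an example) can receive the JSON execution context that Dataverse posts to a registered webhook; the property names shown follow the RemoteExecutionContext contract.

```python
import json
import logging
import azure.functions as func

app = func.FunctionApp()

@app.route(route="dataverse-webhook", auth_level=func.AuthLevel.FUNCTION)
def dataverse_webhook(req: func.HttpRequest) -> func.HttpResponse:
    """Receives the execution context Dataverse posts for a registered webhook step."""
    context = req.get_json()
    logging.info(
        "Received %s on %s",
        context.get("MessageName"),        # e.g. Create, Update
        context.get("PrimaryEntityName"),  # e.g. account
    )
    # Acknowledge quickly; push heavier work to a queue or another asynchronous component.
    return func.HttpResponse(json.dumps({"status": "received"}), status_code=200)
```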

Next, as anticipated, let's address the common capabilities across Dataverse and Dynamics 365 FNO apps, starting with dual write. It provides tightly coupled, bidirectional integration between finance and operations apps and Dataverse. There is a platform component to dual write that actually enables the integration, and an application component that provides dedicated packages for integration flows like finance, supply chain management, notes integration, and prospect to cash processes. These packages deliver logic, metadata and, very important, out of the box table maps for syncing entities between Dataverse and FNO, for example table maps for customers, quotes, orders, parties and addresses, and you can even create your own custom table maps or customize the out of the box ones. Depending on how the dual write table maps are configured, they can work unidirectionally or bidirectionally.

There are two dual write synchronization modes: the live sync and the initial sync. The live sync happens when the table maps are running. It is synchronous and it executes in one transaction. So, for example, if we create an account in Dataverse and FNO for some reason fails during the account creation, an error will be displayed to the user in Dataverse and the transaction will be rolled back, meaning the account will not be created. The second dual write synchronization mode is the initial sync. This can be used to migrate reference and master data in an asynchronous manner.

It runs when a table map is started, it is optional, and you can skip it if you don't have to sync data between the environments. There are constraints and limitations you should be well aware of before using initial sync as a method for data integration. Next, OData endpoints. These are available across Dataverse and FNO.

OData is a protocol standard for building and consuming RESTful APIs over rich data sources. It can be used across a wide variety of programming languages and platforms that support HTTP requests and authentication using OAuth 2.0. OData endpoints are subject to the service protection limits and they work across both apps, supporting complete CRUD, querying options, and executing functions and actions. OData for FNO will expose all of the data entities that are marked as public. It is synchronous and it works well for low to medium volumes.
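As a minimal, hedged sketch of reading through that synchronous endpoint, assuming a token obtained as in the earlier authentication example and a placeholder environment URL; CustomersV3 is used here as a typical public data entity.

```python
import requests

FNO_URL = "https://yourenv.operations.dynamics.com"          # placeholder environment URL
access_token = "<token acquired as in the earlier sketch>"   # placeholder bearer token

# $select and $top keep the synchronous call small; cross-company=true widens the
# query beyond the caller's default legal entity.
query = (
    f"{FNO_URL}/data/CustomersV3"
    "?$select=CustomerAccount,OrganizationName"
    "&$top=10&cross-company=true"
)
customers = requests.get(
    query,
    headers={"Authorization": f"Bearer {access_token}", "Accept": "application/json"},
).json()["value"]
```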

For Dataverse, we have the Web API endpoint that implements OData and can be used with the SDK for .NET for server side logic, as well as with JavaScript for client side logic. It's suitable for low to high volumes and it can execute in synchronous and asynchronous mode. Following that, the Synapse Link integration: both apps allow for integration with Azure Data Lake, Azure Synapse Analytics and Microsoft Fabric. Let's go through each of these three concepts. First, Data Lake is a storage solution that allows you to process data on demand with enterprise grade security, auditing and support. It is a great landing zone for your Dataverse data before it is utilized in another service or application.

Azure Synapse Analytics is a limitless analytics service that brings together data integration, enterprise data warehousing and big data analytics. It gives you the freedom to query data on your own terms, using either serverless or dedicated resources at scale. Then, Microsoft Fabric is our latest offering for an all in one analytics solution for enterprises that covers everything from data movement to data science, real time analytics and business intelligence. The foundation of Microsoft Fabric is the OneLake, or lakehouse, architecture, which is built on top of Azure Data Lake Storage. Azure Synapse Link for Dataverse will let you continuously export selected tables from Dataverse, including those referring to finance and operations tables that have been enabled as virtual ones and have also had change tracking enabled. The exported data is stored in the Azure Data Lake in the Common Data Model format, and this provides semantic consistency across apps and deployments. For the platform

as a service mode, where customers can use their own data lake with Azure Synapse Link for Dataverse, the export runs near real time with around 15 minutes frequency and is built for high volume synchronizations. For the software as a service mode, when using Azure Synapse Link with Microsoft Fabric, the frequency is around 1 hour and, again, it's built for high volume synchronizations. Next, events. Both apps allow for event based integration, and the features available are business events and data events. Business events are a mechanism to emit events from FNO and Dataverse and notify external systems like Power Automate or Azure services.

They run asynchronously, near real time, with a few seconds of delay after the event that triggered them, and are suitable for medium volumes. They fit well into a process integration design. They are intended to be small and are not suitable for data integration scenarios. Examples of business events are vendor payment posted, purchase order confirmed or order creation. For data integration scenarios, you can use data events.

These are events that are based on changes to data in finance and operations apps. All of the standard and custom entities in finance and operations apps that are enabled for OData can emit data events. You can activate an event from the data event catalog in FNO and associate it with an endpoint. The data events functionality supports a burst rate of 5,000 events per five minute period, up to 50,000 events per hour across all of the entities for the environment.

Finally, on this slide, data events and business events complement each other, and you can use both of these to deliver on your specific requirements. Finally in this chapter, virtual tables. They are a way to access data from external sources in Dataverse and Power Platform without storing or replicating the data in Dataverse. You can create virtual tables in Dataverse from external data sources like finance and operations apps, SharePoint or SQL Server with a very easy, seamless experience. Virtual tables also support OData v4 providers that can be used with OData v4 web services.

And you can even create your own custom data providers. Virtual entities appear as regular entities in Dataverse, and this allows you to create model driven apps, Power Automate flows or Power BI reports using data from data sources such as Dynamics 365 FNO apps without needing to copy or synchronize this data. For FNO environments with Power Platform integration enabled, the virtual entities configuration is done automatically in Dataverse, leaving it up to the makers to choose which entities to enable as virtual tables with a simple flag. For optimal latency, we always recommend that you have both finance and operations apps and Dataverse colocated in the same Azure region. When they are colocated, the virtual entity overhead is expected to be less than 30 milliseconds per call.
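As a hedged sketch, once an FNO entity has been enabled as a virtual table it can be read through the Dataverse Web API like any other table; the entity set name below is purely illustrative, so check the actual name generated in your environment.

```python
import requests

DATAVERSE_URL = "https://yourorg.crm.dynamics.com"           # placeholder environment URL
access_token = "<token acquired as in the earlier sketch>"   # placeholder bearer token

rows = requests.get(
    # "mserp_custcustomerv3entities" is a hypothetical virtual table name used for illustration.
    f"{DATAVERSE_URL}/api/data/v9.2/mserp_custcustomerv3entities?$top=5",
    headers={"Authorization": f"Bearer {access_token}", "Accept": "application/json"},
).json()["value"]
```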

And regarding volumes, virtual entities today are suitable for rather low to medium volumes. All right, so Amira, I am handing it over to you to continue with the Azure tools.

Thank you very much, everyone. Now let's dive deeper into the Azure part and explore some commonly used components. As you might know, in Microsoft's Azure ecosystem there are many tools available to help the Power Platform integrate smoothly with external components. In this section, we will provide a brief overview of some of the most commonly used Azure tools, explaining their functions and how to start learning about them.

So let's start with Azure Service Bus, which streamlines Power Platform integration by offering consistent, asynchronous communication between systems. It enhances data exchange and event driven workflows, ensuring the responsiveness and efficiency of Power Apps or Power Automate flows, for example, and it remains a very important part, as mentioned by my colleague Ali earlier (a small code sketch follows after this overview). The second component, highly used for integration, is Azure Logic Apps.

They seamlessly integrate with the Power Platform, offering multiple connectors to automate workflows, connect services and trigger actions. Azure Logic Apps mainly simplify processes and accelerate development. We also have Power Automate, which, like Logic Apps, streamlines business process automation using multiple connectors for seamless integration with various applications and services. The fourth component we have is Azure Functions, which enhance Power Platform integration by enabling custom serverless functions triggered by Power Apps or Power Automate events, for example. They streamline data processing and automation, extending the capabilities of the Power Platform for an efficient solution, and they are highly used in several integrations of the Power Platform with external components.

We also have Azure Event Grid, which integrates with the Power Platform, enabling easy event capture and response. You can automate workflows and trigger actions within the Power Platform by setting up event subscriptions and configuring triggers and actions. This integration enhances the responsiveness and the functionality of your solution. And finally, we have Azure API Management, which simplifies Power Platform integration by offering a secure gateway for connecting the Power Platform with external services and data sources. It ensures data consistency, security and compliance.
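To ground the messaging piece of this overview, here is a minimal, hedged sketch of sending an event to an Azure Service Bus queue with the Python SDK; the connection string and queue name are placeholders, and in practice you would prefer a managed identity over a connection string.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<service-bus-connection-string>"   # placeholder; prefer managed identity + Key Vault
QUEUE_NAME = "d365-outbound-events"                  # placeholder queue name

# A decoupled, asynchronous hand-off: the listener (Logic App, Function, middleware)
# picks the message up on its own schedule.
with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
    with client.get_queue_sender(QUEUE_NAME) as sender:
        sender.send_messages(
            ServiceBusMessage('{"event": "SalesOrderConfirmed", "orderId": "SO-000123"}')
        )
```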

For sure, there are many more Azure services that exist and that you can use in your integration. Here we picked the most used and most important ones that we have seen, but we have other tools like Azure Key Vault, for example, that is present in all of the integrations, et cetera. So let's move now to security, where we will provide a brief overview of key security guidance.

Well, security is a very important topic when it comes to integration, and we recommend further exploration of best practices because every scenario is particular and needs to be studied more deeply; we cannot really provide a general rule for all the integration parts when it comes to security. So for general guidance we picked the most important items; the list is very long and here are some of them. First of all, we have authentication, which is mainly covered by Azure Active Directory for secure identity and access management. We have the OAuth authentication protocol, which is highly used.

Then app users, which consist of using application user identities for improved access control management. We also have managed identities to securely access Azure resources and avoid the exposure of credentials, and, for sure, Azure Key Vault to manage secrets and certificates. So, as you might know, for all integrations the best practice is to store all secrets and certificates in Azure Key Vault and then pick up the values when needed during the integration. In the authorization section we have role based access control, which consists of managing permissions and restricting access within the Power Platform and connected systems based on job responsibilities and least privilege. We also have external security with encrypted channels, and here you need to make sure that all communication happens over HTTPS with TLS 1.2 to prevent data interception in transit.
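Putting managed identities and Key Vault together, here is a minimal sketch of how an integration component can fetch a secret at runtime without any credential in code; the vault URL and secret name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up a managed identity when running in Azure,
# so no client secret ever lives in code or configuration files.
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://your-vault.vault.azure.net", credential=credential)

# Hypothetical secret name holding, for example, the client secret used for Dataverse calls.
dataverse_client_secret = client.get_secret("dataverse-client-secret").value
```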

We also have the external tools part, where you need to make sure that you integrate only with known tools and regularly get the updates when available, especially when they are security updates. And for hybrid scenarios you need to pay attention and use appropriate tools to reduce the impact, like the on-premises data gateway, which will be detailed in our next TechTalks. And finally, compliance.

So here you need to ensure that the integration complies with the relevant regulations and standards, such as GDPR. I'll now hand it over again to my colleague Ali to finish with the roadmap section. Thank you Amira.

So let's get to the roadmap. We will have a quick look at some of the upcoming announcements and changes that we are bringing into the integration landscape in general. These are meant for the next months and, of course, this is not a full, complete disclosure; more news will actually come in the next weeks, because October is always full of news. So we start with virtual entities for finance and operations applications specifically. We are bringing a lot of performance enhancements. We hear you.

There has been a lot of feedback about virtual entities performance, so we are bringing that in the next months. We are also going to bring a simplified deployment experience, meaning an easier way to deploy your virtual entities, and we will improve the development experience, so those two will go hand in hand to allow you to essentially publish your virtual entities with a lot less complicated setup, moving less between the different Dataverse and FNO contexts. The development experience will also benefit from the one developer initiative that we announced earlier. With that we move to Synapse Link. Synapse Link has been in GA for a few weeks, actually. We will bring a simplified deployment experience; again, we hear your comments, we hear your feedback and we act upon that, and the experience has not been super easy so far. We will also have tighter integration with Microsoft Fabric.

Microsoft Fabric integration is still in preview, as we mentioned. That integration will also improve and it will eventually go GA, probably early next year, but this is not set in stone. Eventually we will also reduce the time to initialization.

So, back to performance enhancements. Another piece of feedback that we received is that initialization takes too long, so we will use parallel initialization and have that in place as well. There is also another bunch of changes or improvements, enabling, in all the different contexts,

the incremental folders and the change feeds that you have seen in the export to data lake, for example. We will also enable FNO tables for the managed lake, which means also for Microsoft Fabric, and other features like numeric values for enums and things like that. So we are bringing a lot of news on the Synapse Link side, and of course we will announce them and you will see more announcements in the very near future. Then Dual Write: as you know, an asynchronous process for Dual Write is going to be a thing, and that would be for all maps, and we will also bring a lot of general quality improvements.

We have received your feedback; again, quality, stability and performance are priorities for Dual Write, and we are delivering that in the coming weeks and months. On the networking side, we have new network isolation to connect to private endpoints and private network enabled resources in Azure and within the network. Basically, it makes it easier to connect with the private Azure world and, by extension, the private network world. This is again great feedback you've given us many times. It's also very much requested for FNO.

For now it's not there; we're bringing the two worlds together anyway. So, as we mentioned, Dual Write and virtual tables will help you still make this transition if you want to use this feature more extensively. So we will go through a few resources that you will receive. Of course, you will be able to download these TechTalk slides.

You will have a lot of resources to navigate. I wanted to point out the many TechTalks that we have; for most of the things that we have mentioned there are dedicated TechTalks that you can explore and deep dive into, and some others will come soon, as soon as we start making those announcements about changes in the applications and in the capabilities. With that, I think we can jump into our Q&A session.

We have probably something like five minutes right now. I think we have already had a few questions, right, Akshat? Yeah. Thank you. Thank you, Ali, Corina, Amira. So, yeah, we can take, let's say, a couple of them.

The first question is "We are about to implement an external WMS integrated with FNO. Which integration pattern should we look at, considering volumes are high?" Okay, actually I will take this one. So again, volumes are very important, but they are not the only factor.

So, in a nutshell, we cannot answer this question; the number of factors that you have to consider is much, much larger. But I want to reiterate the proposal:

please submit your scenario in our form, in our poll, and we will look into it, and maybe we can share it with the wider community if you agree with that. Yeah, completely agree. So volume is just one of the factors, but there are many other factors, which we went through at the start of this TechTalk, that we need to consider while designing or choosing an integration pattern. Thank you, Ali. So I'll take the next one.

So, "Will all integration with FNO eventually go through Dataverse?" My understanding is that eventually FNO will use Dataverse as its database. Ali, can you take it? Yeah, let me quickly respond to this. So the short answer: we don't have a plan to change those endpoints in the short term, or even the midterm to be honest. You can already use integrations with Dataverse to also integrate with FNO.

For example, virtual entities for FNO can be exposed by Dataverse endpoints, so you can still use them. We're not going to remove the FNO endpoints at the moment. Even if our idea is to bring the two worlds closer and closer, the underlying technology is not yet there to make us decide anything like that, and deprecation of FNO endpoints would take a long time to come.

Thanks Ali, I think we can take one more. So there's a question: "Can you talk about the new parallel package API capability in FNO?" I can take this one. So yes, historically there were some intermittent issues while using parallel package APIs. But now, if you go to your data management workspace and framework parameters, there's a checkbox to enable the enhanced parallel package API.

So you can check that option, and it is applicable not only to the Package API but also to the recurring integration APIs as well. Okay, so I think we are at the top of the hour. We have covered most of the questions; there are still a lot of them, and we will try to answer them offline and see if it is possible to document them somewhere for you.

So yeah, we can now wrap up. So thanks to our presenters one more time, and to you, our audience, for attending the TechTalk today. We hope you have a great rest of your day. Thank you everyone.
