The cloud in general, and Azure in particular, is a big place! There are services of all kinds for all needs: compute, data, messaging, IoT, machine learning, and so on. These services are largely grouped into three well-known cloud service models:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)
But especially with the advent of serverless, there are two more service models that are not mentioned as often:
- Functions as a Service (FaaS)
- Integration Platform as a Service (iPaaS)
This article will focus on the last one (iPaaS) and how to develop such a solution in Azure by looking at the best practices, the tooling available, and pitfalls that can be encountered in a real-world project.
Goal of an iPaaS Solution
The end goal of an iPaaS solution is to automate and streamline business processes that span multiple areas of an enterprise and involve multiple information systems (applications, services, 3rd party SaaS solutions, etc.) that need to exchange information (messages).
Most frequently, the desire is to integrate with the least amount of bespoke code possible and without changing the code of the integrated systems. Since those systems are often 3rd party products, changing their code is usually not even an option.
Azure Offering for iPaaS
The usual lineup of Azure services used to build an iPaaS solution is the so-called Azure Integration Services suite, made up of:
- Azure Logic Apps
- Azure Functions
- Azure Service Bus
- Azure Event Grid
- API Management
I will focus mostly on Azure Logic Apps, which usually sit at the core of integration solutions as the orchestrators of the integration workflows.
I will not get into the 101-level details of Logic Apps, as that is out of scope for this article, but it is important to point out that there are two flavors of Logic Apps: the Consumption-based (multi-tenant) one, and the new Single-tenant one that reached GA status at the end of May 2021.
Editorial Note: A brief overview on Logic Apps can be read at Azure Logic Apps - An Overview.
Single-tenant vs Consumption Logic Apps
Single-tenant logic apps, also called Standard (that is how you will find them among the Azure resources), are relatively fresh out of General Availability (GA) and are very similar to Consumption-based logic apps. All the basics are pretty much the same, so most of the documentation applies to both. However, there are a few differences:
Consumption Logic Apps:
- Contain one workflow per app
- Pay per execution (consumption)
- Fully managed by Azure
Standard Logic Apps:
- One logic app can contain multiple workflows, resulting in better performance because of proximity
- Run in the single-tenant Azure Logic Apps runtime, which has standard pricing based on multiple pricing tiers
- Are based on the Azure Functions runtime extensibility model, so they can theoretically run anywhere an Azure Function runs
- Have support for VNET and Private Endpoints
- Have more built-in connectors
- Can opt in to stateless workflows, which give even better performance
The full set of differences and the nitty-gritty of the two is part of the official docs. But to better understand the implications from a day-to-day development and deployment perspective, have a look at the image below:

Figure 1. Consumption vs Standard Logic Apps
Before moving on, a short but important side note:
API Connections, while created automatically when configuring logic app actions and triggers, are actually stand-alone Azure objects (resources) that could theoretically be created independently of the application code.
In a Consumption logic app, there is only one workflow, and everything is baked into a single ARM template that deploys everything: infrastructure, connections, and application logic in one JSON file. Deploying the app follows the usual practices and DevOps tasks for ARM template deployment. Hence the logic app itself, its workflow, and its ARM template are often simply referred to as "the logic app".
In a Standard logic app, the application logic (the workflows) is separate from the infrastructure bits and more closely resembles the traditional model of developing any other app: you write it in a project, maybe compile it, package it, and then deploy it to previously set-up infrastructure (for example, an App Service web app). This has the following important implications:
- You can have multiple workflows in the same logic app instance that can call each other in a performant way
- The connections.json file holds metadata about the API Connections. You can think of this data as pointers to the actual objects in Azure. The file can be parameterized to support deployment to multiple environments.
- The workload of developing logic apps can be split: the ARM template infrastructure parts can be assigned to people with an Azure infrastructure skill set and get their own IaC (infrastructure-as-code) pipeline, separate from the "application" CI/CD pipelines.
Now let's see a few best practices and patterns that I found useful when developing an integration solution.
Best Practices for an iPaaS System
Just like with any other design pattern or best practice out there, you have to consider that these are not laws or hard rules - just things that generally work well in a certain context. Evaluate carefully whether they make sense for your particular use case and, if not, adapt them so that they are helpful instead of adding extra complexity with no real benefit. Don't force a square peg into a round hole.
Use the Publish-Subscribe Pattern
Do not use point-to-point integration. Messaging should be the preferred way of integrating source and target systems, as well as the internal components of the integration platform. It is a good idea to have a clear separation between data producers (publishers) and data consumers (subscribers).

Figure 2. Publish-Subscribe vs Peer-to-Peer Integration
The publisher will take or receive data from the source system and deliver it to the interested subscribers.
Access to source and target systems can be done via API calls or connectors (which are API calls behind the scenes), but if the systems support messaging, prefer that option.
There are a few advantages with this setup:
- increased reliability
- load levelling: it accommodates systems working at different speeds
- decoupled producers and consumers that can evolve independently of each other, with minimal changes to the interface between them
- a more open-closed design: over time, adding another target system that needs the same data from the source system will be a breeze

Figure 3. Pub-Sub with Multiple Consumers
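To make this setup concrete, here is a minimal C# sketch of the publisher and subscriber sides using the Azure.Messaging.ServiceBus SDK. The topic and subscription names, the connection string placeholder, and the payload are all hypothetical:

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("<service-bus-connection-string>");

// Publisher: sends to a topic without knowing who the consumers are.
ServiceBusSender sender = client.CreateSender("orders-topic");
await sender.SendMessageAsync(new ServiceBusMessage("{ \"orderId\": 42 }")
{
    ContentType = "application/json"
});

// Subscriber: each target system listens on its own subscription of the same topic,
// so new consumers can be added without touching the publisher.
ServiceBusProcessor processor = client.CreateProcessor("orders-topic", "erp-subscription");
processor.ProcessMessageAsync += async args =>
{
    string payload = args.Message.Body.ToString();
    // ...hand the payload over to the consuming workflow or function...
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args => Task.CompletedTask; // log the error in a real implementation
await processor.StartProcessingAsync();

In the Logic Apps world, the publisher and subscriber roles are typically played by workflows using Service Bus connectors rather than hand-written code, but the topic/subscription topology is the same.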
Optimized Pub-Sub with Event Grid
One important fact to remember is that the Service Bus connectors for logic apps use the pull model: they poll every few seconds and trigger the logic app if there are messages on the Service Bus.
One way to optimize this process, and potentially reduce some cost, is to use Event Grid as a complement to the Service Bus.

Figure 4. Service Bus and Event Grid Integration
The important distinction of Event Grid is that it uses the push model, which means the logic app is triggered as soon as there are messages on the Service Bus. An action in the logic app can then read and process the Service Bus messages.
Note that the Service Bus Premium tier is required for Event Grid integration. Which event to listen for and other technical details of the integration can be found in the docs.
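In a logic app this is just a matter of wiring the Event Grid trigger to the Service Bus namespace, but the mechanics are easier to see in code. Below is a hedged C# sketch (not the actual logic app implementation) of what the consumer does once the Event Grid notification arrives: the notification carries no payloads, so the messages still have to be read from the Service Bus subscription. Names and batch sizes are illustrative only:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class EventGridDrivenConsumer
{
    // Invoked when the Event Grid event signals that messages are waiting.
    public static async Task DrainAsync(ServiceBusClient client)
    {
        ServiceBusReceiver receiver = client.CreateReceiver("orders-topic", "erp-subscription");

        // The Event Grid notification only says "there is something to read";
        // the actual message bodies are pulled from the subscription here.
        IReadOnlyList<ServiceBusReceivedMessage> batch =
            await receiver.ReceiveMessagesAsync(maxMessages: 20, maxWaitTime: TimeSpan.FromSeconds(5));

        foreach (ServiceBusReceivedMessage message in batch)
        {
            // ...process the message...
            await receiver.CompleteMessageAsync(message);
        }
    }
}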
Use Logic Apps as Orchestrators, not Heavy Lifters
The producer and consumer logic apps have the potential to grow fast in a production scenario, where the complexity and the number of actions can become quite large for various reasons driven by functional or non-functional requirements: encryption/decryption, access to Azure Key Vault, data transformations, and instrumentation are just a few. When this happens, break the logic app into multiple workflows and delegate the heavy lifting to Azure Functions.

Figure 5. Logic Apps as Orchestrators
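As an illustration of the "heavy lifter" side, here is a minimal sketch of an HTTP-triggered Azure Function (in the in-process C# model) that a workflow could call as an action. The function name and the placeholder transformation are hypothetical:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class TransformOrder
{
    [FunctionName("TransformOrder")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // The logic app posts the raw payload; the function owns the CPU-heavy part
        // (mapping, encryption, large transformations) and returns the result.
        string input = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogInformation("Transforming payload of {Length} characters", input.Length);

        string transformed = input.ToUpperInvariant(); // placeholder for the real transformation
        return new OkObjectResult(transformed);
    }
}

This keeps the workflow itself a thin orchestration layer: a trigger, a call to the function, and the follow-up actions.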
Use an Anti-Corruption Layer at the Borders
Interfacing with multiple 3rd party systems poses the challenge of dealing with multiple concepts, multiple terms for the same concept, and sometimes the same term used for semantically different objects. A good approach is to borrow some concepts from Domain-Driven Design (DDD): define a Ubiquitous Language for the integration solution and keep it consistent with the help of anti-corruption layers (ACLs) at the ingress and egress points of the iPaaS system.

Figure 6. Anti-Corruption Layers at the edges
Received messages are converted as soon as possible in the ingress ACL, where the data is massaged into a structure that follows the ubiquitous language and makes the most sense for the business. It is converted into something the target systems expect only at the latest possible moment, in the egress ACL.
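A tiny sketch of what an ingress ACL boils down to in code: translate the external shape into the canonical model as soon as the data enters the platform. All type and property names below are made up for illustration:

// The 3rd party's own vocabulary...
public record ExternalShopOrder(string order_ref, string cust_no, decimal grand_total);

// ...and the canonical model expressed in the ubiquitous language.
public record CanonicalOrder(string OrderId, string CustomerId, decimal TotalAmount);

public static class OrderIngressAcl
{
    public static CanonicalOrder Translate(ExternalShopOrder external) =>
        new CanonicalOrder(
            OrderId: external.order_ref,
            CustomerId: external.cust_no,
            TotalAmount: external.grand_total);
}

The egress ACL does the mirror-image translation just before handing data to a target system.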
Use a Canonical Message Model
Any data object received by the integration platform is wrapped, before being pushed through the internal messaging system, in an envelope that contains technical metadata. The metadata can contain anything you think would be useful for the following possible use cases:
- efficient routing of messages
- filtering of messages at the subscription level
- enabling selective resubmission of failed messages
- any kind of instrumentation (ex: logging)

Figure 7. Canonical Model
Both Service Bus and Event Grid support adding this metadata in the form of user-defined properties/attributes.
One important piece of metadata is usually the message or event type. Every message/event should be assigned such an identifier. It is best to use a namespace-like naming convention of the form:
- [Company Name].[Integration Name].[optional: some other sub-grouping].[Event Name] (ex: MyCompany.ECommerce.OrderCreated)
This allows different subscribers to filter messages based on wildcards and listen for entire sets of more or less specific events. For example, a subscription interested in all events from the ECommerce integration would filter on MyCompany.ECommerce.*.
You could also have meta-attributes like a target subscriber code or a publisher code, which leads us to a more general pattern called Message History.
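Here is a hedged sketch of both sides of this idea with the Azure.Messaging.ServiceBus SDK: the publisher stamps the envelope metadata as application properties, and a subscription applies a SQL rule filter as the Service Bus equivalent of the MyCompany.ECommerce.* wildcard. The property, topic, and subscription names are assumptions for illustration:

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

// Publisher side: payload in the body, routing metadata as user-defined properties.
var message = new ServiceBusMessage("{ \"orderId\": 42 }")
{
    MessageId = Guid.NewGuid().ToString(),
    CorrelationId = Guid.NewGuid().ToString()
};
message.ApplicationProperties["EventType"] = "MyCompany.ECommerce.OrderCreated";
message.ApplicationProperties["Publisher"] = "ECommercePublisher";

// Subscription side: filter on the namespaced event type with a SQL rule.
var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");
await adminClient.CreateRuleAsync("orders-topic", "reporting-subscription",
    new CreateRuleOptions("ECommerceOnly",
        new SqlRuleFilter("EventType LIKE 'MyCompany.ECommerce.%'")));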
Message History

Figure 8. Message History
While the full details can be found here, the basic idea is that any component that receives a message or event appends a unique marker to the message metadata, so there is a complete trace from where the message originated all the way to the end. A particular case of this pattern is when there is only one "messaging hop", so the history contains a single element: the publisher code.
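A minimal sketch of the appending step, assuming the history travels in a user-defined application property (the property name and the separator are arbitrary choices made for this example):

using Azure.Messaging.ServiceBus;

public static class MessageHistory
{
    // Appends this component's marker before the message is forwarded,
    // e.g. "ECommercePublisher|OrderEnricher|ErpSubscriber".
    public static ServiceBusMessage AppendHop(ServiceBusReceivedMessage incoming, string componentCode)
    {
        // In a real implementation the other application properties would be copied over as well.
        var outgoing = new ServiceBusMessage(incoming.Body);

        string history = incoming.ApplicationProperties.TryGetValue("MessageHistory", out var value)
            ? value?.ToString() ?? ""
            : "";

        outgoing.ApplicationProperties["MessageHistory"] =
            string.IsNullOrEmpty(history) ? componentCode : $"{history}|{componentCode}";

        return outgoing;
    }
}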
Message Payload Validation
In many systems and applications, validation is usually done at the very beginning of processing, for example as soon as a request comes in. So, it is tempting to validate early in an integration system as well, and in some cases it may make sense: if the message is invalid, we save some network traffic by not letting it travel through the system.
However, in a scenario with multiple subscribers for the same message, the message may be invalid for one component but valid for the others (each component might look at a different set of properties of the message). If we validate at the beginning, we reduce the open-closed characteristic of the system.
As such, validation is best done on the consumer side, letting each consumer decide which messages are valid for it. Invalid messages can be sent to the dead-letter queue of that particular subscription. This is a more extensible, more OCP-compliant design.
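As an illustration, here is a hedged C# sketch of a subscriber that validates only the fields it cares about and dead-letters what it cannot use; the validation rule and all the names are hypothetical:

using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusProcessor processor = client.CreateProcessor("orders-topic", "erp-subscription");

processor.ProcessMessageAsync += async args =>
{
    using JsonDocument payload = JsonDocument.Parse(args.Message.Body.ToString());

    // This subscriber needs CustomerId; other subscribers may happily accept the same
    // message without it, so the rule lives here, not at the publisher.
    bool validForThisConsumer = payload.RootElement.TryGetProperty("CustomerId", out _);

    if (!validForThisConsumer)
    {
        await args.DeadLetterMessageAsync(args.Message,
            deadLetterReason: "ValidationFailed",
            deadLetterErrorDescription: "CustomerId is required by this subscriber");
        return;
    }

    // ...normal processing...
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args => Task.CompletedTask; // log the error in a real implementation
await processor.StartProcessingAsync();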
API Integration
When integrating with external APIs, either for input or for output, it is often best to make use of API Management (APIM).

Figure 9. API Management Integration
APIM is a good place to externalize cross-cutting concerns like:
- handling API security schemes in one place
- doing data transformations (ex: XML to JSON)
- protecting the integration with rate limits
- providing a static IP for outgoing calls, which 3rd party systems often require in an enterprise scenario
If you expose an HTTP endpoint for a source system to push data in, pay special attention to the retry capabilities of that system: without them, the slightest network glitch can cause loss of data. If the retry capabilities are missing or not good enough, see if you can invert the call direction, so that instead of the source pushing data into the iPaaS, the iPaaS pulls the data from the source, because logic apps have good retry mechanisms. If this is not an option either, discuss it with all the stakeholders involved and establish clear responsibility boundaries for each system.
Logging and Instrumentation
Single-tenant logic apps integrate with Azure Application Insights, but there are times when you want to write some custom lines to the logs. Unfortunately, there is no direct action/connector to do it; you have to rely on Azure Functions to work around this limitation.
For maximum scalability and performance, you can use what I call the async logger pattern:

Figure 10. Async Logger pattern
This would be the equivalent of _logger.LogError("some exception message") in the logic apps world. Event Grid is used to make the flow async, fire-and-forget style, while still having confidence that the message is not lost. One thing to note is that the calling workflow has to set all the message details:
- the log type: info, error, warning, etc.
- the message itself
- and, very important, the timestamp of the log entry at the moment it was sent, so that the chronological order of log entries is preserved in case of transient delivery failures of the logging event
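On the receiving end of the pattern, the function that consumes the logging event can write to Application Insights while preserving the original timestamp. Here is a minimal sketch of that writer; the LogEntry shape, the class names, and the Event Grid wiring around it are assumptions for illustration:

using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public record LogEntry(string Level, string Message, DateTimeOffset Timestamp);

public class AsyncLogWriter
{
    private readonly TelemetryClient _telemetry; // typically injected by the Functions host

    public AsyncLogWriter(TelemetryClient telemetry) => _telemetry = telemetry;

    public void Write(LogEntry entry)
    {
        var trace = new TraceTelemetry(entry.Message, ToSeverity(entry.Level))
        {
            // Keep the time the workflow emitted the entry, not the time it was processed,
            // so the chronological order survives delayed or retried deliveries.
            Timestamp = entry.Timestamp
        };
        _telemetry.TrackTrace(trace);
    }

    private static SeverityLevel ToSeverity(string level) => level?.ToLowerInvariant() switch
    {
        "error" => SeverityLevel.Error,
        "warning" => SeverityLevel.Warning,
        _ => SeverityLevel.Information
    };
}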
Aside from this async logging of simple text messages, many integrations require more complete, clear, and detailed traceability of the messages plus reporting capabilities.
Message Store Pattern
For this purpose, an adaptation of the message store pattern can be used.
The idea is simple: log key message metadata whenever a message crosses a component boundary, in either direction.

Figure 11. Message Store Pattern
To avoid creating too much network traffic, to protect Personally Identifiable Information (PII), and to comply with GDPR regulations, only a subset of the entire message is stored: mostly metadata and key fields.
The storage can be anything, ranging from simple Table Storage to Cosmos DB or relational databases.
The schema of the stored entries can be anything that makes sense for your needs, but here is an example:

When a component publishes something on the Service Bus, it also stores an entry in the table (ex: the first entry above).
When a component receives the message, it immediately writes a new record with the Received status, and a few moments later it writes one with the status Processed or Dead letter, depending on the outcome of the processing.
The publisher generates new Message IDs and Correlation IDs, and the subscriber uses the same ones when logging the record on the other side.
Publisher Code and Handler Code are mutually exclusive (they cannot both have value in the same entry in storage).
The Handler will log the message multiple times as needed/required:
- on the first receive
- on processing success
- on processing failure (dead-letter)
Querying this table by DomainID, for example, will show the complete trail of that record across the integration.
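To make the shape of such entries concrete, here is a hedged sketch using Azure Table Storage and the Azure.Data.Tables SDK. The column names mirror the fields discussed above, but the exact schema, keys, and values are illustrative only:

using System;
using System.Threading.Tasks;
using Azure;
using Azure.Data.Tables;

public class MessageStoreEntry : ITableEntity
{
    public string PartitionKey { get; set; }   // e.g. the integration name
    public string RowKey { get; set; }         // e.g. a GUID, or reverse ticks for ordering
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }

    public string MessageId { get; set; }
    public string CorrelationId { get; set; }
    public string DomainId { get; set; }       // the business key, e.g. the order number
    public string PublisherCode { get; set; }  // set only by publishers
    public string HandlerCode { get; set; }    // set only by subscribers
    public string Status { get; set; }         // Published, Received, Processed, Dead letter
}

public static class MessageStore
{
    public static async Task DemoAsync()
    {
        var table = new TableClient("<storage-connection-string>", "MessageStore");

        // Written by the publisher right after putting the message on the Service Bus.
        await table.AddEntityAsync(new MessageStoreEntry
        {
            PartitionKey = "ECommerce",
            RowKey = Guid.NewGuid().ToString(),
            MessageId = "msg-001",
            CorrelationId = "corr-001",
            DomainId = "ORDER-42",
            PublisherCode = "ECommercePublisher",
            Status = "Published"
        });

        // Querying by the domain id gives the complete trail of that record across the integration.
        await foreach (MessageStoreEntry entry in table.QueryAsync<MessageStoreEntry>(e => e.DomainId == "ORDER-42"))
        {
            Console.WriteLine($"{entry.Timestamp} {entry.PublisherCode}{entry.HandlerCode} {entry.Status}");
        }
    }
}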
Tooling
In terms of tooling for developing single-tenant logic apps, we have only two options at the moment - Visual Studio Code with the Logic Apps (Standard) extension, and the Azure Portal. The Consumption-based version of logic apps can also be authored with Visual Studio, but that option is not available for Standard, at least not in Visual Studio 2019.
As mentioned at the beginning of the article, the code of a Standard logic app is separate from the infrastructure ARM template, so the app itself is nothing more than a collection of JSON files, as can be seen in the docs (some files and folders omitted for brevity):
MyLogicApp
| WorkflowName1
|| workflow.json
|| ...
| WorkflowName2
|| workflow.json
|| ...
| .funcignore
| connections.json
| host.json
| local.settings.json
So technically these files can be edited in Visual Studio as well, but there is no additional help from the IDE. Editing logic apps in VS Code is usually done via the provided workflow designer, and the app can technically be run locally.
Tips & Tricks
There is a saying: in theory, practice should be the same as the theory, but in practice it is not. While developing real-world logic apps, you might hit a few bumps along the way. Here are a few of them and possible solutions or workarounds. Note that these issues were encountered at the end of November 2021, so depending on when you read this article, things may or may not have changed.
Use the right version of the runtime
Standard logic apps run on top of the Azure Functions runtime, so you need to have the right tooling installed and working properly in VS Code, especially the Azure Functions Core Tools. What I found out the hard way is that the workflow designer for logic apps does not properly support Azure Functions Core Tools version 4.0: the designer takes a long time to start and then fails with a cryptic error message. It just won't start. This makes it pretty hard to develop Azure Functions on .NET 6 and logic apps at the same time.
The solution was to revert to developing Azure Functions on .NET 5, which runs on the 3.x version of the Core Tools and runtime host. Once you have Core Tools 3.0.x and Functions runtime 3.x, the designer starts with no problems.
Portal is king but can’t live without VS Code
While you can develop Standard logic apps locally, I found the experience suboptimal: the tooling is clunky, feels buggy, and is just not friendly to use. For example, in an HTTP-triggered workflow, to get the calling URL you have to:
- Save the workflow
- Run the project with F5
- Right-click on the workflow.json file and select Overview (see here)
A bit too much for too little.
If you have an error in the workflow, two things can happen: either the designer shows a blank page when you save, or it shows the workflow but the workflow fails at runtime without presenting you with any meaningful error message. The "compile-time experience" (notice the quotes) is quite poor, I would say.
If you make multiple changes, save them as a batch, and something fails, you have very little to go on when debugging the problem. My advice is to make small changes and save often.
Also, not all changes are possible in the workflow designer. If you want to rename an action, you have to do it carefully by changing all its references in the workflow.json file, without any automated static checks. I know it is just JSON in the end, but I had higher expectations from the tooling.
Overall, my experience was that it was easier and nicer to author the workflows in the portal and then copy-paste the JSON code back into VS Code to be stored in source control. On top of that, you also need to make sure the logic app is deployable to multiple environments like DEV, QA, and PROD. This leads us to…
Productionizing the Logic App
Because in any real-world project you have to store the logic app code in source control and be able to deploy the same code to multiple environments, you can't limit yourself to the Azure Portal editor. Usually, any 3rd party system you are integrating with has a separate tenant for each of these environments, so the logic app has to use the proper connection objects pointing to the right tenant. This means the same code in the connections.json file has to be parameterized to point to the right objects based on the environment.
This can be done by referencing app settings variables from the connections.json file. In the local environment, the app settings take their values from the local.settings.json file, while on Azure they come from the environment variables.

Figure 12. App Settings in Azure portal
These values can be set either manually or from the YAML CI/CD pipeline using the appSettings property:
- task: AzureFunctionApp@1
  displayName: 'Deploy Logic App'
  inputs:
    azureSubscription: '$(AzureSubscription)'
    appType: 'functionApp'
    appName: '$(LogicAppName)'
    package: '$(System.ArtifactsDirectory)/$(ArtifactName)/$(PackageName).zip'
    deploymentMethod: zipDeploy
    appSettings: '-WORKFLOWS_SUBSCRIPTION_ID $(variable-defined-in-pipelines-library)'
However, here is the first catch! Once you parameterize connections.json, the local designer will no longer work. Fortunately, this is a known issue at Microsoft at the moment, but I am not sure when it will get fixed.
The solution I found was to use multiple connection files:
- connections.json, which looks like the one shown in Figure 13 and is changed automatically by the designer when configuring connections from the workflow's actions and triggers

Figure 13. Connection metadata that is not parameterized
- connections.params.json, which you have to parameterize manually. As you can see in Figure 14, there are multiple pieces that have to be extracted into app settings variables using the special @appsetting('key name') function recognized by the runtime.

Figure 14. Parameterized connection metadata
This leads us to catch number two!
Configuration Err…bugs
Whenever you encounter the following problems:
- when calling the logic app trigger URL, you get a 404 from Postman
- when you run the trigger directly from the Azure portal, it fails
- when you get weird errors along the lines of "Trigger not found or missing", or anything else that does not make sense
…then you probably have a configuration-related error:
- either you are missing an environment variable (app setting) in Azure, in the Configuration tab of the logic app
- or the value of the app setting is incorrect
- or the connection object pointed to by the app settings is not in a 'Connected' state to the external service
- or there is an error in the parameters.json file (see details here)
- or, here comes the crazy part, you are simply following the docs from here and using @{appsetting('some var')} (notice the curly braces):

Figure 15. Official docs on parameterizing connections
For some reason, if you use the curly braces ({}), the app saves just fine, nothing crashes or errors out on save, but it fails at runtime as mentioned above.
Using @appsetting() without braces fixes the problem. So there is a bug either in the docs or in the Logic Apps Standard implementation in Azure.
Wrap Up
As enterprises scale and grow, they will need more and more automation in their business processes, and they will rely even more on iPaaS solutions such as those Azure Integration Services can deliver: a group of services that Logic Apps belongs to, next to Service Bus, Event Grid, APIM, and Functions.
Very recently, Azure-based integration solutions entered the Leaders category for enterprise iPaaS in the Forrester Wave classification.

Figure 16. Forrester Wave radar
So hopefully, the patterns shown above will give you a good head start in your first or next encounter with an iPaaS system.
Logic Apps are categorized as a "no-code" or "low-code" solution for integration use cases, and they have huge potential to save time, especially because of the hundreds of connectors they offer for 3rd party systems and other Azure services. With the single-tenant (Standard) flavor, they become friendlier to developers and move closer to "application" development than "infrastructure" development.
Unfortunately, the current tooling and the various little shortcomings here and there (as described above) that you need to work around have the potential to reduce or negate the speed advantage of a low-code solution.
Hopefully, this article succeeded in pointing out (some of) these imperfections so that you can avoid them and have a much smoother and nicer experience developing integration solutions in the future!
This article has been editorially reviewed by Suprotim Agarwal.
Liviu is a passionate developer and technical architect with over 15 years of experience in software development on the .NET stack and Azure, mostly encompassing web technologies. As a true craftsman, he is always fighting to keep business requirements, pragmatism and perfect design in balance. While interested in any software design and architectural topic, his current focus is on designing and building Microservices systems on the Azure Cloud for companies worldwide.