Designing system integrations

Lucian Davitoiu
5 min read · May 23, 2021

Today organizations have a large number of applications, bought from different vendors or built in-house. They are often maintained by different IT teams and serve different business functions. Integrating them has been a constant challenge.

To meet this challenge, various products have been built and books written. One of the most enduring sources of wisdom in this area is Gregor Hohpe’s collection of enterprise integration patterns:
https://www.enterpriseintegrationpatterns.com/gregor.html

In this post I’m looking at designing the applications (aka middleware, hubs, enterprise service buses) that integrate other applications. As with any software design, the goal is to ensure that requirements are met by application features and divided over components. What’s specific to the EAI domain is that many requirements come from the applications being integrated; business users normally have less visibility into this kind of application and its design.

Systems and requirements

Integration solutions may well be regarded as “plumbing”, and the term is apt given that their most prevalent feature is data pipelines between other applications. It is also misleading, though, because the logic built into the data flows can be rather complex. This is especially the case when organizations choose to deploy vanilla versions of off-the-shelf products: the complexity (bespoke functionality) has to move somewhere else.

In order to talk about system integration you need at least a provider and a consumer system, so requirements are centered around these. Additional requirements vary, but are frequently about message visibility and reliability: features often available in mature integration products, but not always sufficient on their own.

Fit within IT estate

An organization’s IT strategy has a large impact on integration designs. It should be weighed accordingly, not so much to ensure technical interoperability (an integration solution is by definition meant to mediate between different protocols), but for ease of ongoing operation and change.

The technology employed by an integration design needs to be aligned with the organization’s tools, procedures and skill set. Organizations that prefer buy over build don’t have that option when it comes to system integration, because integration is naturally bespoke to each IT estate.

As customized applications are harder to replace, it’s important that IT teams can support the deployment and operation of the entire technology stack that comes with an integration design.

Organization structure

Conway’s law describes the tendency of enterprise software designs to mirror the hierarchical structure of the organization that builds them. This force is especially strong in system integrations. It requires good analysis to make the best use of it, work around complexity and avoid rigid data flows.

In extreme cases, this force can lead to inefficient processes, duplicated modules, high costs and brittle architectures. While there is no general antidote, it usually helps to get sponsors who have the bigger picture of the IT estate. It’s usually this high-level view that drives the best technology and investment decisions.

Estimation

A high-level design should be a prerequisite for a reliable estimate. It should break the solution down into factors like:

• Source/target endpoints and protocols used (e.g. HTTP, JMS, CIFS shares)
• Message structures and formats (e.g. XML, flat-files)
• Data transformations and enrichment
• Workflows, specific logic that must be enforced on data flows
• Internal persistence components (as opposed to external DB access)
• Non-functional aspects like in-transit/at-rest security, fault tolerance and performance

While these cover a significant portion of the estimation factors, the list is not exhaustive and each project may have its own specific ones.

Next we’ll look a bit closer at some of the components that make up a system-to-system interface and some common concerns.

Data Formats

In an enterprise, data is exchanged in a plethora of older and newer formats. The complexity of the data structure can vary from simple tabular CSV structures to intricate nested XML trees. The level of nesting and the number of columns will be a first clue as to the overall complexity of an interface.

Here it’s important to appreciate how well these structures are defined and how easy it is for upstream and downstream parties (i.e. providers and consumers) to agree on them. This is where standards are a great (cost) saver.
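Where a schema exists, validating messages at the boundary is a cheap way to enforce the agreed structure. Below is a minimal sketch using the JDK’s built-in XML validation; the file names are illustrative assumptions, not part of any real interface:

    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;
    import java.io.File;

    public class MessageValidator {
        public static void main(String[] args) throws Exception {
            // Load the agreed contract (XSD) once; file names are illustrative.
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File("order-v1.xsd"));

            // Validate an incoming message before it enters the data flow.
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File("incoming-order.xml")));
            System.out.println("Message conforms to the agreed schema");
        }
    }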

Transformations

Transformation is simply taking one or more source messages and producing one or more new target messages. The complexity lies in how closely aligned the sources and targets are. Many tools can auto-generate the mappings for “1-to-1-field” cases. At the other extreme, determining the target requires cross-referencing multiple messages or even external data sources. Such complexity should be encapsulated in dedicated components that can be built and tested in isolation.
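As a rough illustration, a field-level transformation isolated in a small, independently testable component might look like the sketch below. The messages are represented as maps and all field names and the lookup table are made up for the example:

    import java.util.HashMap;
    import java.util.Map;

    public class OrderTransformer {

        // Maps a hypothetical source message to a target structure.
        // The simple rename is trivial; the concatenation and lookup stand in
        // for the derivation and enrichment logic that usually adds complexity.
        public Map<String, Object> transform(Map<String, Object> source,
                                             Map<String, String> countryCodeLookup) {
            Map<String, Object> target = new HashMap<>();
            target.put("orderId", source.get("id"));                  // 1-to-1 field
            target.put("customerName",                                // derived field
                    source.get("firstName") + " " + source.get("lastName"));
            target.put("countryCode",                                 // enrichment via external data
                    countryCodeLookup.get(source.get("country")));
            return target;
        }
    }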

Reliability

Fault tolerance, data consistency and availability are key elements of a robust design. The CAP theorem explains that in the case of network faults, aka partitions (P), one has to choose between consistency (C) and availability (A) guarantees.

The requirements should justify the trade-off, but in system integrations preference is typically given to consistency. This is largely because at each stage messages must be reconciled between sender, receiver and the EAI application: no messages can be lost or duplicated. This can be achieved via distributed transactions where the data stores support them, or via compensation within error handling where they don’t.
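A minimal sketch of the compensation approach is shown below. The collaborators and operations are hypothetical stand-ins for a real message store and target system; the point is simply that a message is only acknowledged after a confirmed delivery, and parked for retry or reconciliation otherwise:

    public class ConsistentForwarder {

        // Hypothetical collaborators standing in for real stores/queues.
        interface MessageStore { void markSent(String messageId); void markFailed(String messageId); }
        interface TargetSystem { void deliver(String messageId, String payload) throws Exception; }

        private final MessageStore store;
        private final TargetSystem target;

        ConsistentForwarder(MessageStore store, TargetSystem target) {
            this.store = store;
            this.target = target;
        }

        void forward(String messageId, String payload) {
            try {
                target.deliver(messageId, payload);
                store.markSent(messageId);      // acknowledge only after confirmed delivery
            } catch (Exception e) {
                store.markFailed(messageId);    // compensation: park the message for retry/reconciliation
            }
        }
    }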

If preference is given to availability, then messages may be lost or duplicated, hence end-to-end consistency is sacrificed. However, in this case messages can be read (or processed) locally by a single system. Moreover, on communication paths unaffected by the partition messages can still flow, and the data pipeline may work acceptably, albeit without the consistency guarantee.

Reliability depends on infrastructure (servers, network, security), integration frameworks (e.g. platforms like BizTalk and Informatica, or libraries like Camel) and the actual application design. Each layer has a part to play and should be resilient. The design and implementation greatly depend on (and are coupled to) the underlying technology used.
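For example, with a library like Camel, much of the retry and dead-lettering behaviour can be declared at the framework layer rather than hand-coded. The sketch below assumes hypothetical queue names and a JMS component configured elsewhere:

    import org.apache.camel.builder.RouteBuilder;

    public class OrderRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Framework-level resilience: retry a few times, then park the
            // message on a dead letter queue instead of losing it.
            errorHandler(deadLetterChannel("jms:queue:orders.dlq")
                    .maximumRedeliveries(3)
                    .redeliveryDelay(2000));

            from("jms:queue:orders.in")       // hypothetical endpoint URIs
                    .to("bean:orderTransformer")
                    .to("jms:queue:orders.out");
        }
    }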

Testing will flag the weak components. While each component with dependencies should provide defensive fault handling, the main focus should be on improving the reliability of the faulty component or on finding a replacement.

Persistence

Systems get integrated to share data and link processes. To support process integrity, data is persisted at different stages as it flows from system to system. At their core, these applications are data-centric and must consider the following aspects:

• Message tracking and reconciliation
• Message volume and size
• Preferred/legacy data stores (e.g. RDBMS, NoSQL)
• Preferred/legacy message transfer (e.g. MQ, Kafka)
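As a rough illustration of the first point, a tracking store keeps one entry per message so that flows can be reconciled end to end and duplicate deliveries detected. In practice this would live in one of the preferred data stores rather than in memory, and all names below are assumptions made for the sketch:

    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class MessageTracker {

        // Minimal tracking record: enough to reconcile a message end to end.
        record TrackingEntry(String messageId, String source, String target,
                             String status, Instant lastUpdated) {}

        private final Map<String, TrackingEntry> entries = new ConcurrentHashMap<>();

        // Returns false if the message was already seen (duplicate delivery).
        public boolean track(String messageId, String source, String target) {
            TrackingEntry entry = new TrackingEntry(messageId, source, target, "RECEIVED", Instant.now());
            return entries.putIfAbsent(messageId, entry) == null;
        }

        // Records progress (e.g. TRANSFORMED, DELIVERED, FAILED) for reconciliation reports.
        public void updateStatus(String messageId, String status) {
            entries.computeIfPresent(messageId, (id, e) ->
                    new TrackingEntry(id, e.source(), e.target(), status, Instant.now()));
        }
    }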
