
Evaluating Integration Architecture Patterns Across Use Cases

Posted by Matthew Grant on September 30, 2022

LeanIX research has found that enterprises earning €1+ billion in annual revenue have an average of 650 applications deployed at any given time. For the very largest companies (top 10%), that number climbs to a staggering 3400 applications.

It’s no surprise, then, that integration architecture has become a critical priority for enterprises seeking to manage their IT landscapes, strategically integrate applications to deliver services, and continuously track data flows. But it's also a complex process, one that requires integration architects (IAs) to analyze which patterns are best suited for each individual use case.

That’s what Donovon Simpson discussed in his talk at the LeanIX Connect Summit in Boston. (Donovon held the session as OneMain Financial’s Lead Software Engineer at the time but has since joined the LeanIX team, and we're more than excited to have him on board!)


Donovon Simpson speaking at LeanIX Connect Summit in Boston

In the sections that follow, we’ll recap some of the most important takeaways from Donovon’s talk, including the three types of integration patterns to consider, their pros and cons, and an overview of the decision tree OneMain Financial uses to guide their engineers in making the right IA implementation decisions.

Quick Takeaways

  • The HTTP REST integration is a good option for real-time and synchronous traffic scenarios. It’s universally compatible and inherently built to handle large payloads.
  • Messaging/queue and event-driven integrations work well in asynchronous scenarios. Both have high message delivery reliability but limitations on message size and scaling.
  • Using a framework like OneMain Financial’s Decision Tree encourages innovation from developers while providing standardized decision-making criteria across IA use cases.

3 Integration Patterns

HTTP REST Integrations

The HTTP REST integration pattern is one of the most common and can be considered the most basic: the consumer sends a request to the producer, and the producer sends a response back. A response is built into the exchange itself.

This pattern is a good option for real-time and synchronous traffic. Because there is a built-in response, the consumer can immediately get whatever it is they’re asking to receive. It also has great streaming capabilities; HTTP protocol has a built-in capability to handle large payloads.

HTTP is also nearly universally compatible — it’s likely to work seamlessly with the hundreds (or thousands) of technologies an enterprise uses.

On the flip side, when HTTP REST is used in asynchronous scenarios, the producer has to send an otherwise unneeded response just to indicate that the request will be processed. Polling is another issue: if the consumer is waiting for something to be ready, it has to send a new request every time it wants a status update.
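
To make the request/response mechanics concrete, here is a minimal sketch in Python using the requests library; the endpoint and payload are hypothetical, not something from Donovon's talk.

```python
# A minimal sketch of the synchronous request/response flow, using Python's
# "requests" library. The endpoint and payload are hypothetical.
import requests

# Hypothetical producer endpoint; swap in your own service URL.
response = requests.get(
    "https://api.example.com/accounts/12345",
    headers={"Accept": "application/json"},
    timeout=5,  # the consumer blocks here until the producer answers
)
response.raise_for_status()   # surface HTTP-level errors immediately
account = response.json()     # the answer arrives in the same round trip
print(account)
```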

Messaging/Queue Integrations

In messaging/queue integrations, the producer puts a message onto a queue for the consumer to dynamically pull. Messages can be processed one at a time, or multiple consumers can pull them in parallel. Producers can also configure queues to be persistent so that messages don’t get lost if the queue goes down for some reason, which keeps the deployment fail-safe.

Unlike HTTP REST, this pattern works well in asynchronous scenarios. Once the producer drops a message onto the queue, it no longer worries about when the consumer will process it (note: some messages may need to expire after a certain period, so it’s important to look at your own use cases).

Message delivery is reliable in a queue because messages can be stored as needed. No polling is required in this pattern, either; code can be automatically triggered once a message is available. Message queues can also absorb traffic spikes and surges by smoothing them out over time, which makes the cost of a given consumer easier to predict because it processes messages at a steady rate.

This pattern isn’t a great fit for real-time and synchronous traffic because there is no built-in response. Message tracing can also get messy if you have a lot of queues open and/or multiple applications using multiple queuing servers. You’ll need to use message IDs to map interactions.

Two final potential drawbacks to this pattern are size and scaling limitations. If, for example, a producer is trying to share hundreds of messages that include files, at some point you may hit the queue server’s size limit. Scaling can also become difficult if producers drop messages faster than the queue can handle them without clustering.
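
Here is a minimal sketch of that producer/consumer decoupling in Python. It uses the standard library's queue and threading modules purely for illustration; a real deployment would sit on a broker such as RabbitMQ or SQS, and the payloads and timings below are assumptions.

```python
# A minimal sketch of producer/consumer decoupling. Python's standard-library
# queue stands in for a real broker (RabbitMQ, SQS, etc.); payloads are made up.
import queue
import threading
import time

work_queue: queue.Queue = queue.Queue()

def producer() -> None:
    # The producer drops messages and moves on; it never waits for processing.
    for i in range(5):
        work_queue.put({"order_id": i})
    work_queue.put(None)  # sentinel: nothing more to send

def consumer() -> None:
    # No polling loop asking "is it ready yet?": get() blocks until a message
    # is available, so the consumer is effectively triggered by new messages.
    while True:
        message = work_queue.get()
        if message is None:
            break
        time.sleep(0.1)  # simulate work; bursts are smoothed into a steady rate
        print("processed", message)

threading.Thread(target=producer).start()
consumer()
```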

Event-driven Integrations

Event-driven integration is newer and getting no shortage of buzz at the moment — you may have heard it’s the “right way” to handle integration architecture and considered whether you should be transitioning more completely to this pattern.

This integration pattern shares similarities with the message queue pattern but acts more like a log file. Producers write messages to a persistent log, where they are stored along with all other messages previously sent. Each consumer has a cursor in the log that indicates where it has read up to. If they need to, consumers can rewind their cursor and reprocess messages; in fact, every consumer can easily reprocess the same message as many times as needed.

This pattern is best for asynchronous traffic since consumers can read messages at their own pace on the log. The consumer doesn’t even need to know the producer — just the message format. The log makes message delivery reliable and eliminates the need for polling in order to get status updates.

Another significant advantage to this pattern is multi-consumer processing, where the producer posts a single message and all relevant consumer applications can read it from the same source.
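
The log-and-cursor mechanics can be sketched in a few lines of Python. The event names and the rewind helper below are illustrative assumptions rather than any particular broker's API, but the shape mirrors what log-based systems like Kafka provide.

```python
# A minimal sketch of an event log with per-consumer cursors (a simplified,
# Kafka-like model). Event names and the rewind helper are illustrative only.
from typing import Any

event_log: list[dict[str, Any]] = []   # append-only log kept by the "broker"
cursors: dict[str, int] = {}           # one cursor (offset) per consumer

def publish(event: dict[str, Any]) -> None:
    event_log.append(event)            # producers only append; nothing is removed

def consume(consumer_id: str) -> list[dict[str, Any]]:
    # Read from this consumer's cursor to the end of the log, then advance
    # the cursor. Other consumers' cursors are unaffected.
    start = cursors.get(consumer_id, 0)
    events = event_log[start:]
    cursors[consumer_id] = len(event_log)
    return events

def rewind(consumer_id: str, offset: int = 0) -> None:
    # Move the cursor back so earlier events can be reprocessed.
    cursors[consumer_id] = offset

publish({"type": "loan.approved", "loan_id": 1})
publish({"type": "loan.funded", "loan_id": 1})
print(consume("reporting"))       # both consumers read the same events...
print(consume("notifications"))   # ...from the same shared log
rewind("reporting")
print(consume("reporting"))       # and any consumer can reprocess them
```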

Like the messaging/queue pattern, real-time and synchronous use cases aren’t a great fit for event-driven integrations. Message tracing can be difficult here as well, and message size limits apply.

Lastly, if you require exactly-once processing and/or you need messages to be read in the exact order they’re sent, this pattern is not the best choice.

OneMain Financial’s Integration Decision Tree

While a pattern may advertise itself as the only one you need (a one-size-fits-all solution), this simply isn’t the case for most companies. When dealing with the large-scale tech stacks that most enterprises maintain, IT leaders must look at which combination of IA patterns best suits their use cases.

OneMain Financial uses an “integration decision tree” to accomplish this. They don’t want to use only one or even two of the above-outlined patterns — they want to be able to evaluate the pros and cons of each, and leverage all of them based on what's needed where.

The first question on the tree for developers: Is this purely a read operation? If so, HTTP REST is probably best. If not, developers move on to a series of questions that help them make a decision based on standardized criteria. The power of the decision tree is that it encourages innovation while also providing a guiding framework that maintains consistency across the enterprise.
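
As a rough illustration, a version of that decision logic might look like the following Python sketch. Only the first question (is this purely a read operation?) comes from the talk; the remaining criteria are hypothetical stand-ins, not OneMain Financial's actual tree.

```python
# A rough sketch of a pattern-selection helper. Only the first question (is this
# purely a read operation?) comes from the talk; the other criteria are
# hypothetical stand-ins, not OneMain Financial's actual decision tree.
def choose_pattern(purely_read: bool,
                   needs_immediate_response: bool,
                   multiple_consumers: bool) -> str:
    if purely_read:
        return "HTTP REST"        # from the talk: pure reads favor HTTP REST
    if needs_immediate_response:
        return "HTTP REST"        # assumption: synchronous traffic needs a response
    if multiple_consumers:
        return "Event-driven"     # assumption: fan-out to many consumers
    return "Messaging/queue"      # assumption: point-to-point asynchronous work

print(choose_pattern(purely_read=False,
                     needs_immediate_response=False,
                     multiple_consumers=True))
```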

The Takeaway

Integration architecture, like most other aspects of managing enterprise IT, has become more complex over time and requires dedicated focus to execute successfully. Sticking to a single integration pattern may be perceived as the easier route but limits your ability to maximize ROI across use cases.

By using a combination of integration patterns and making decisions rooted in predetermined criteria, enterprises can balance standardization and innovation successfully.

If you would like to watch Donovon’s talk in its entirety, you can do so here. [Free registration required].
