The Service Oriented Architecture (SOA) pattern is commonly found in the enterprise. SOA is an approach where components (services) communicate with each other over the network. Each service focuses on a specific goal. For example, an SOA ecommerce store may have a service for credit card processing, another for inventory checking, another for user management, and another for the web interface.
A common mechanism for connecting these services is the message queue. For example, the web interface service would place a message on a queue for the credit card processor service requesting that a card be charged, then wait for a response from that service on a reply queue. This asynchronous request/reply exchange is one common message queue pattern. Others include publish/subscribe and work queues.
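The request/reply pattern described above can be sketched in a few lines of Java. This is a minimal illustration using in-memory queues as stand-ins for broker-managed queues; the service names, message shape, and queue setup are hypothetical, not from any particular product.

```java
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RequestReplyDemo {
    // A simple message carrying a correlation ID so replies can be matched to requests.
    record Message(String correlationId, String body) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> requestQueue = new ArrayBlockingQueue<>(10);
        BlockingQueue<Message> replyQueue = new ArrayBlockingQueue<>(10);

        // The "credit card processor" service: consume a request, then reply
        // with the same correlation ID so the caller can match the response.
        Thread processor = new Thread(() -> {
            try {
                Message request = requestQueue.take();
                replyQueue.put(new Message(request.correlationId(), "CHARGED"));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        processor.start();

        // The "web interface" service: send a request, then wait on the reply
        // queue for the message carrying the matching correlation ID.
        String correlationId = UUID.randomUUID().toString();
        requestQueue.put(new Message(correlationId, "charge card ending 1111"));
        Message reply = replyQueue.take();
        System.out.println(reply.correlationId().equals(correlationId)
                ? "reply matched: " + reply.body()
                : "unmatched reply");
        processor.join();
    }
}
```

In a real SOA deployment the two `BlockingQueue`s would be queues managed by a broker such as IBM MQ, and the two sides would be separate processes, but the correlation-ID handshake works the same way.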
Because SOA systems are composed of so many components, and because they tend to be large, complex, legacy systems, migrating them to the cloud can be difficult. There are a few migration strategies to choose from. “Lift and shift” can work, where all of the existing software is moved, unmodified, to the cloud. However, that still requires a large cutover: everything runs on legacy infrastructure one moment and on the cloud the next. Such “big bang” approaches are risky and tough to manage. Another approach is to migrate services iteratively, one at a time, maintaining a hybrid legacy/cloud solution until the final service is moved to the cloud. This reduces overall risk: at any given time, only one service could be broken by the migration, as opposed to risking total system failure with the big bang approach. It’s also easier to manage, as the work is more spread out.
An Iterative, Cloud Native Approach
When migrating an SOA system one service at a time, the greatest challenge is keeping services connected while they are hosted by different providers: during the migration, some will run in the legacy data center, and some will run in the cloud. The migration is also a great opportunity to modernize the services as they move, such as by switching them to cloud native queue offerings instead of legacy ones. The solution is to build an adapter between the legacy queue system and the cloud native one.
In the case of a recent migration, the system used IBM MQ and was being migrated to use the cloud native Amazon Web Services (AWS) Simple Queue Service (SQS). At a high level, the migration plan was:
- Create a connection between the legacy data center and AWS (using a VPN and/or AWS Direct Connect)
- Create an adapter that lives in AWS to proxy messages between IBM MQ and AWS SQS
- For each service, one at a time:
  - Modify the service to use AWS SQS instead of MQ
  - Deploy the now cloud native service to AWS
  - Shut down the legacy service
- After all services have been migrated, shut down the connection between the legacy data center and AWS
- Finally, decommission the legacy data center
The Cloud to Legacy Message Queue Adapter
This adapter must proxy messages in both directions between the legacy software (IBM MQ) and the cloud native one (AWS SQS). In SOA, there is no standardized message format or protocol, so an off-the-shelf product is unlikely to fit. However, building such an adapter isn’t that difficult given the right building blocks, and the free and open source Apache Camel integration framework provides them. If a version of Camel with enterprise support is required, Red Hat Fuse is also a great option.
In the case of this migration, the adapter was built using Red Hat Fuse. The application consists of a number of routes connecting IBM MQ queues (using the Camel JMS component) to SQS queues (using the Camel AWS-SQS component). Each route modifies message headers as required by the conventions of the queues it connects. Understanding those queue-specific conventions was the primary challenge of building the adapter.
For example, applications using one legacy queue relied heavily on the JMS correlation ID to relate messages to each other. The adapter therefore had to copy the JMS correlation ID into an SQS message attribute, and the applications using this queue needed to check that attribute for the correlation ID.
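The core of that mapping can be sketched as a small pure-Java transform, independent of any broker SDK. Note that the attribute name `JMSCorrelationID` is a convention the adapter and the consuming services must agree on; it is not a built-in SQS attribute, and the method name here is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class CorrelationHeaderMapper {
    // Agreed-upon SQS message attribute name carrying the JMS correlation ID.
    static final String CORRELATION_ATTRIBUTE = "JMSCorrelationID";

    // Build the outgoing SQS message attributes, copying the JMS correlation
    // ID over when the inbound JMS message carried one.
    static Map<String, String> toSqsAttributes(String jmsCorrelationId) {
        Map<String, String> attributes = new HashMap<>();
        if (jmsCorrelationId != null) {
            attributes.put(CORRELATION_ATTRIBUTE, jmsCorrelationId);
        }
        return attributes;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = toSqsAttributes("order-42");
        System.out.println(attrs.get(CORRELATION_ATTRIBUTE)); // prints "order-42"
    }
}
```

In the Camel-based adapter, logic like this would run inside each route as a header-mapping step before the message is handed to the SQS endpoint; the reverse route would copy the attribute back into the JMS correlation ID.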
Modifying Each Service for Cloud Native Queues
The formula for modifying each service is the same: remove the legacy queue dependency, add the new cloud native queue dependency, then fix the resulting compilation errors and update the logic. For this migration, that meant removing the IBM MQ Maven dependency, adding the SQS dependencies, then fixing the compilation problems and updating the message-handling logic.
One decision to be made is which library to use to interface with the cloud native queue. In this case, there were two options: the Amazon SQS Java Messaging Library or Camel AWS-SQS. Either could be used successfully; the choice comes down to the team’s comfort level with the software and the specifics of the service. Some services used the Amazon SQS Java Messaging Library for its simplicity. Others used Camel AWS-SQS because they already had other integrations requiring Camel, and the team was familiar with it.
Finally, the service’s logic has to be updated. For example, the JMS correlation ID is handled differently in SQS than in IBM MQ, necessitating changes to how messages are correlated. Ideally, such changes apply to many services, so a solution can be agreed upon globally and applied the same way everywhere, reducing complexity going forward.
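One way to apply such a solution globally is a small shared helper that every migrated service uses to read the correlation ID, so the convention lives in one place. This is a sketch under the assumption (from the adapter example) that the ID travels in an SQS message attribute named `JMSCorrelationID`; the class and method names are illustrative.

```java
import java.util.Map;
import java.util.Optional;

public class Correlation {
    // The agreed-upon SQS message attribute carrying the correlation ID.
    static final String ATTRIBUTE = "JMSCorrelationID";

    // Resolve the correlation ID from an SQS message's attribute map,
    // returning empty when the message carried none.
    static Optional<String> fromSqsAttributes(Map<String, String> attributes) {
        return Optional.ofNullable(attributes.get(ATTRIBUTE));
    }

    public static void main(String[] args) {
        System.out.println(fromSqsAttributes(Map.of(ATTRIBUTE, "req-7")).orElse("missing"));
        System.out.println(fromSqsAttributes(Map.of()).orElse("missing"));
    }
}
```

With the AWS SDK, a service would feed this helper the string values extracted from the received message's attributes; because every service goes through the same helper, a future change to the convention is a one-line edit rather than a fleet-wide hunt.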
In this case, the whole SOA enterprise is composed of dozens of services spread across as many teams; migrating all of that software at once is simply infeasible. By migrating each service iteratively, we reduced risk. The Enterprise also realized incremental cost savings: as each service was migrated, the legacy infrastructure it ran on could be decommissioned.
This cloud migration will be going on for a long time yet. With an established pattern and a track record of success, full cloud native migration is inevitable. And the Enterprise is very much looking forward to its more efficient, more reliable, cloud native future.