This article was originally published by Ampere Computing.
“You might not be familiar with Amadeus, because it is a B2B company […but] when you search for a flight or a hotel on the Internet, there is a good chance that you are using an Amadeus powered service behind the scenes,” according to Didier Spezia, a Cloud Architect for Amadeus.
Amadeus is a leading global Travel IT company, powering the activities of many actors in the travel industry: airlines, hotel chains, travel agencies, airports, etc. One of Amadeus’ activities is to provide shopping services, which search and price flights, to travel agencies and to companies like Kayak and Expedia. Amadeus also supports more advanced capabilities, such as budget-driven queries and calendar-constrained queries, which require pre-calculating multi-dimensional indexes. Searching for suitable flights with available seats across many airlines is surprisingly difficult.
Finding the optimal solution is an NP-hard problem, so to provide a best-effort answer, Amadeus uses a combination of brute force, graph algorithms, and heuristics. This requires large-scale distributed systems and consumes a lot of CPU, running today on thousands of machines on Amadeus’ premises. To fulfill customer requests, Amadeus operates multiple on-prem facilities worldwide and also runs workloads on multiple cloud service providers.
The Project
A few years ago, Amadeus began a large, multi-year project to migrate most of its on-prem resources to Azure. For this specific use case, Amadeus worked jointly with Microsoft to validate Ampere ARM-based virtual machines (VMs).
During the discussion, Mo Farhat from Microsoft commented:
From our position…[Microsoft] wants to give our customers choice. We’re not driving [them] towards one architecture versus another … or one CPU versus another. We want to provide a menu of options and provide trusted advice …
Initially, as part of the transition, Amadeus was not necessarily interested in introducing a different architecture. According to Spezia:
We only introduce a different architecture because we expect some benefit… We are very interested in the performance/price ratio we can get from Ampere…We want the capability to mix machines with traditional x86 CPUs and machines with Ampere CPUs and run workloads on the CPUs best suited for that workload.
They chose a large, distributed, compute-intensive C++ application as the first to run on Ampere, because they felt it would show the greatest comparative benefit over x86.
We thought ARM-based machines could be a good match, but of course, we needed to validate and confirm our assumptions. We started by running a number of synthetic benchmarks. […] The results were positive, but synthetic benchmarks are not extremely relevant. Since introducing a new CPU architecture in the ecosystem is not neutral, we needed a better guarantee and decided to benchmark with real application code. […] The application is a large C++ code base. It depends on a good number of low level open-source libraries, plus some Amadeus middleware libraries, and finally the functional code itself. A subset of this code has been isolated for the benchmark to run in a testbed.
One of the factors that made the project successful was the Amadeus team’s ability to obtain Ampere servers early in the project. According to Spezia:
To start, Amadeus installed a couple of machines with Ampere Altra CPUs on-prem. They were used for the initial porting work, and still run our CI/CD today. Since we are in the middle of a migration to the public cloud and very much in the hybrid model with a complex ecosystem, we appreciated the flexibility to deploy some machines on-prem, with the same CPU architecture as the VM delivered in Azure by Microsoft. We found it invaluable to use machines running the target architecture for CI/CD and testing, rather than doing cross-compilation.
The application’s CI continues to run on an Ampere server in the Amadeus lab.
Challenges
Porting our code started with recompiling everything using an Arm64-compatible toolchain (AArch64 target), with implications for our CI/CD.
The process of getting this code working on Ampere went very smoothly, although it did reveal some issues. Some platform-specific compiler behavior, such as whether the “char” data type is signed or unsigned by default, differs between x86 and Arm64, and the application had made assumptions about that behavior.
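As an illustration of this class of issue (a sketch, not code from the Amadeus application; the function name next_byte_or_sentinel is hypothetical), the following C++ snippet stores a negative sentinel in a plain char. Built for x86-64 Linux, where char is signed by default, the sentinel test succeeds; built for AArch64 Linux, where char is unsigned by default, it silently fails:

```cpp
#include <cstdio>

// Returns 0 when the "end of data" sentinel is detected, otherwise the byte.
// The pitfall: a plain char is signed on x86-64 Linux but unsigned on
// AArch64 Linux, so the -1 sentinel only compares equal on one of the two.
int next_byte_or_sentinel(bool end_of_data) {
    char c = end_of_data ? -1 : 'A';   // -1 intended as an "end" marker
    if (c == -1) {                     // true on x86-64; never true on AArch64,
        return 0;                      // where c silently became 255
    }
    return c;
}

int main() {
    // Prints 0 when compiled for x86-64, 255 when compiled for AArch64.
    std::printf("%d\n", next_byte_or_sentinel(true));
    return 0;
}
```

The C++ standard leaves the signedness of plain char implementation-defined, so both compilers are behaving correctly; such assumptions only surface when the code is rebuilt for a new architecture.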
To compile their large C++ code base, Amadeus uses both the GCC and Clang C++ compilers. Among the changes required as part of the port, a number of open-source dependencies needed upgrades to take advantage of improved Arm64 support; some of those upgrades involved API or behavior changes that required further code changes. In addition, several latent issues in the codebase related to undefined or platform-defined behavior, which had never revealed themselves on x86, were exposed and fixed as part of the migration.
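One common way to flush out this kind of platform-defined assumption (a sketch, not necessarily the approach Amadeus took) is to query the target’s defaults at compile time and to use fixed-width types wherever signedness actually matters; GCC and Clang also accept -fsigned-char or -funsigned-char to force a specific default:

```cpp
#include <cstdint>
#include <cstdio>
#include <type_traits>

int main() {
    // Report the default signedness of "char" for the target this binary was
    // built for: signed on x86-64 Linux, unsigned on AArch64 Linux.
    std::printf("char is %s by default\n",
                std::is_signed_v<char> ? "signed" : "unsigned");

    // Fixed-width types behave identically on every architecture, so they
    // are a portable replacement wherever the signedness matters.
    std::int8_t sentinel = -1;
    std::printf("int8_t sentinel == -1: %s\n",
                sentinel == -1 ? "true" : "false");
    return 0;
}
```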
Deployment
In the cloud, Amadeus applications are deployed on OpenShift clusters (Red Hat’s Kubernetes-based container platform). To be operated in production, the applications require a full middleware ecosystem (enterprise service bus, logging and monitoring facilities, etc.), which is also hosted in OpenShift.
Amadeus did not want to migrate their entire application infrastructure to Arm64. Red Hat, another trusted partner, has delivered support for heterogeneous hardware architectures within a single cluster as a fully supported OpenShift feature.
Concretely, this means a single OpenShift cluster can include both x86 and Arm64 compute nodes. By defining node sets of both x86 and Arm64 nodes, labeling and tainting the nodes, and setting the matching selectors and tolerations on the containers to be deployed, developers can easily control which type of VM each pod is scheduled on. The supporting components of the Amadeus application infrastructure can therefore run on traditional x86 VMs, while the application pods that Amadeus chooses to run on Arm64 for cost and performance reasons run on Azure Dpsv5 VMs powered by Ampere Altra CPUs.
Heterogeneous clusters are instrumental in supporting an incremental migration and avoiding doubling the number of OpenShift clusters to be operated.
Results
Obviously, before moving into production, Amadeus wanted to validate their assumptions with some benchmarking. With the cpubench1a synthetic benchmark on 32-vCPU VMs, a single Ampere Altra VM (D32ps_v5) delivered 20% higher raw throughput and a 50% better performance/price ratio than equivalent Intel VMs, and 13% higher raw throughput and a 27% better performance/price ratio than equivalent AMD VMs.
When benchmarking with the realistic shopping application, there was a tradeoff between throughput and response time: the higher the throughput, the more the response time was impacted. The Ampere Altra VMs yielded a 47% performance/price improvement over Intel VMs, with an acceptable 11% degradation in mean response time, and a 37% performance/price improvement over AMD VMs, with a 9% degradation in mean response time.
Amadeus has now ported enough application components to run the real application (not just benchmarks). The company is currently completing integration tests and validating the last bits of the platform. Once done, Amadeus will begin ramping up the production environment in multiple Azure regions.
Built for sustainable cloud computing, Ampere’s first Cloud Native Processors deliver predictable high performance, platform scalability, and power efficiency unprecedented in the industry. We invite you to learn more about our developer efforts, find best practices and insights, and join the conversation at developer.amperecomputing.com and community.amperecomputing.com.
Talk to our expert sales team about partnerships or for more information, or get trial access to Ampere systems through our Developer Access Programs.
Dave Neary leads the Developer Relations team at Ampere Computing, helping raise awareness and adoption of Ampere Arm64 processors in cloud computing. He previously spent a decade working on open-source infrastructure projects and developer tooling as part of the Red Hat Open Source Program Office. He is also a long-time free software and open source advocate, and contributor to multiple open-source projects over the years. He lives in the Boston area with his family.
Craig Hardy is a Senior Technical Program Manager at Ampere Computing with over 30 years of high-tech experience in finance, operations, marketing, and software ecosystems for client, data center, and cloud computing. He is energized by simplifying complex issues into straightforward execution steps. Outside of work, Craig spent nearly a decade owning and operating a local bakery. He and his family live in Portland, Oregon.