Cloud computing has been around since 2006, when Amazon Web Services (AWS) first announced its cloud services for enterprise customers. Two years later, Google launched App Engine, followed by Alibaba and Microsoft's Azure services. The most recent addition to the list of public cloud service providers is Oracle.
As per the Gartner 2021 Magic Quadrant, AWS is the market leader, followed by Microsoft Azure and Google Cloud Platform in second and third position respectively. As Cloud technology evolves, so do customer requirements. Today, Cloud adoption is one of the top priorities of the CXO suite. The Covid-19 pandemic further accelerated the need for cloud adoption, as digitization is no longer optional for organizations but a mandatory requirement. As the pandemic nears its end, there is a surge in demand for cloud services, with most enterprises finding it the right time to leverage them. In this rush, however, enterprises often don't spend enough time assessing the "right" workloads. Enterprises that move to the Cloud this suddenly may be impacted and may eventually have to exit or switch to another Hyperscaler at a later stage.
As per Gartner's report, 81% of respondents said they currently work with two or more public cloud providers. This strongly suggests that multi-cloud is the future of Cloud Computing.
Let us look at the other common factors or reasons (Figure-1) for enterprises to consider adopting a multi-cloud strategy.
Figure-1: Factors leading an enterprise to a multi-cloud strategy
- Regional Presence – Most well-known Hyperscalers have extended their global reach to tap new markets, meet existing customer demands and adhere to regulatory/compliance requirements. As of December 2021, Azure has a presence in 23 regions, AWS in 26 regions, and GCP in 29 regions. Regional presence has a strong impact, as enterprises prefer being closer to their customers, abiding by the compliance requirements defined by their country, and offering high-performance, low-latency services. An enterprise already using cloud services may want to switch to, and/or add, another cloud provider present in its customers' region.
- Best-of-Breed Services – The major Hyperscalers offer a huge portfolio of services across infrastructure, platform, data services and AI/ML. Yet, some cloud service providers enjoy market leadership for specific services. For general infrastructure, AWS is the go-to vendor. Similarly, large enterprises using Microsoft tools and technologies prefer Azure, as they can leverage its licensing model and ease of integration. When it comes to AI/ML and data services, GCP is often preferred, given Google's heritage as a data company.
- Cost Optimization – Cost optimization is always a top priority for a CFO, and enterprises are constantly exploring options to reduce their operating expenses. They expect the Hyperscalers to recommend cost-reduction options, display granular usage, and provide a cost-per-service breakdown. Tools/platforms like CloudCheckr, CoreStack (FinOps), Flexera CMP, etc. offer recommendations and insights for cost optimization. These are advanced, ML-based tools that use past data to recommend the next course of action. Cost optimization plays a vital role in deciding the multi-cloud strategy.
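The cost-per-service breakdown mentioned above can be illustrated with a minimal sketch. This is not how any of the named FinOps tools actually work internally; it simply shows the kind of aggregation they perform, over an invented list of usage records:

```python
# Hypothetical sketch: aggregate per-service spend from raw usage records
# and surface the top cost drivers. The record format and all numbers
# below are invented for illustration.
from collections import defaultdict

def cost_breakdown(records, top_n=3):
    """Sum cost per (provider, service) pair and return the top_n spenders."""
    totals = defaultdict(float)
    for r in records:
        totals[(r["provider"], r["service"])] += r["cost"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

usage = [
    {"provider": "aws", "service": "ec2", "cost": 1200.0},
    {"provider": "aws", "service": "s3", "cost": 150.0},
    {"provider": "gcp", "service": "bigquery", "cost": 800.0},
    {"provider": "azure", "service": "vm", "cost": 400.0},
]
print(cost_breakdown(usage))
```

A real tool would layer ML-based forecasting and rightsizing recommendations on top of this kind of per-service view.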
- Vendor Lock-in – Enterprises are often sensitive to vendor platform lock-in. It is another factor that enterprises consider while evaluating a cloud service provider. Enterprises may opt for a multi-cloud strategy to avoid getting locked into a specific vendor's environment. The trend is to use generic services from one Hyperscaler and specialized services from another vendor. This approach also safeguards enterprises against situations like vendor monopoly, vendor insolvency, etc.
- KPI & SLA – Enterprises want some measurable parameters to evaluate their cloud partners, measure the project progress and its impact on their business. Key Performance Indicator (KPI) and Service Level Agreement (SLA) are the two crucial parameters for assessing the Hyperscaler’s service outcomes.
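As a concrete example of the "measurable parameters" above, an availability SLA translates directly into an allowed-downtime budget. The sketch below assumes a 30-day month and uses 99.9% only as an example target:

```python
# Illustrative sketch: convert an SLA availability target into allowed
# downtime minutes and check a measured figure against it. The 99.9%
# target and the downtime numbers are examples, not any vendor's terms.
def allowed_downtime_minutes(sla_pct, period_minutes=30 * 24 * 60):
    """Minutes of downtime permitted in the period (default: 30-day month)."""
    return period_minutes * (1 - sla_pct / 100)

def meets_sla(downtime_minutes, sla_pct):
    """True if the observed downtime stays within the SLA budget."""
    return downtime_minutes <= allowed_downtime_minutes(sla_pct)

# A 99.9% monthly SLA allows roughly 43.2 minutes of downtime.
print(round(allowed_downtime_minutes(99.9), 1))
print(meets_sla(30, 99.9))
print(meets_sla(60, 99.9))
```

KPIs work the same way: agree on the formula and the measurement window up front, then track the Hyperscaler's outcomes against it every reporting cycle.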
- Competition – When enterprises fear that their customer data may be compromised, impacting their business, they may consider a vendor switch. Though the Hyperscalers must follow guidelines regarding customer data protection, some enterprises still prefer to be over-cautious. For example, a big retail chain switched to a multi-cloud environment to avoid a 'conflict of business' with another vendor.
Let’s now discuss the factors that enterprises must evaluate when selecting a Hyperscaler.
Figure-2: Factors that enterprises must evaluate before choosing a Hyperscaler
- Regional Presence – Enterprises must consider whether the Hyperscaler has a local presence and meets all the regulatory and compliance requirements. Additionally, enterprises must perform a small proof of concept if switching for latency-related reasons. Besides, they must evaluate the connectivity plan via a Point of Presence (PoP) or through a partner channel. Check for partners with regional presence and the type of connectivity options available through them. Do they support Azure ExpressRoute, GCP Cloud Interconnect and AWS Direct Connect?
- Service Portfolio – Enterprises must also look beyond regional presence and network connectivity. Validate the different services that are available with the new Hyperscaler. Ensure the Hyperscaler has all the necessary service options in its portfolio. Likewise, evaluate the services for proper functionality, limitations, resource limit, etc. Enterprises should also investigate the different SLAs for these services before choosing them. For a given vendor, all services may not be available in all the regions. Review the vendor’s roadmap and ensure that the required services will be available before the switch-over.
- Vendor Credibility – Validating the Hyperscaler's credibility is a crucial step while evaluating them. Enterprises can make use of third-party services to ensure vendor credibility. Industry analysts like Gartner, IDC, Forrester, etc. regularly publish vendor-oriented reports. Look out for their evaluation of the Hyperscaler in the Magic Quadrant, Forrester Wave, etc. The Hyperscaler must have a long-term strategy, plan, and roadmap.
- Environment Stability – The challenges of the current environment must not repeat in the new one. Hence, enterprises must evaluate the new environment with the same type of workload or by conducting a proof-of-concept. This may require running the workload in the new setup for a specific duration and closely monitoring it. Try simulating the same use case and monitor the application behavior by setting up alerts. Gradually increase the use-case traffic and monitor the application behavior, ensuring the same problem/issue does not show up.
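The "gradually increase the traffic" step above is essentially a ramp with an error budget: step up the request rate and stop as soon as the observed error rate breaches the budget. In this sketch, `send_traffic()` is a hypothetical stand-in for a real load generator and monitoring query, and its behavior is invented:

```python
# Illustrative sketch of a gradual traffic ramp for a stability PoC.
# send_traffic() stands in for a real load generator plus an error-rate
# query; here it is faked so the ramp logic can be shown end to end.
def send_traffic(rate):
    # Hypothetical stand-in: pretend errors spike above 300 req/s.
    return 0.001 if rate <= 300 else 0.05

def ramp(steps, error_budget=0.01):
    """Return the highest request rate that stayed within the error budget."""
    last_good = 0
    for rate in steps:
        if send_traffic(rate) > error_budget:
            break  # stop the ramp; the environment degraded at this rate
        last_good = rate
    return last_good

print(ramp([50, 100, 200, 300, 400, 500]))  # 300 with this stand-in
```

Running such a ramp for a sustained period, with alerts wired to the same signals, gives a defensible answer to whether the old environment's problems reappear in the new one.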
- Support Model – Evaluate the benefits of each support model available through the Hyperscaler. Go for the one that best suits the enterprise’s requirements. Check different SLAs, KPIs, monthly/quarterly reports etc.
- Migration Tools/Services – Another important factor is whether the Hyperscaler has the necessary tools/products/services required for this type of switch-over. Check if the new Hyperscaler provides any tools or services for workload, database, and data migration to their environment.
For example, every Hyperscaler has a set of tools for workload migration, database migration, data migration, data transformation, etc. AWS provides Application Migration Service for workload migration, AWS Database Migration Service for databases, and AWS DataSync for data migration from on-premises to AWS. Similarly, Google Cloud Platform has tools that make data and workload migration seamless: Migrate for Compute Engine handles workload migration from on-premises to GCP and from AWS/Azure to GCP (one Hyperscaler to another); Migrate for Anthos handles workload transformation from GCE to GKE, or from AWS EC2/Azure VM to GKE; and Storage Transfer Service handles data transfer. Likewise, Azure has Azure Migrate for workload migration, Azure Database Migration Service for databases, etc.
- Competitive Pricing – New Hyperscaler should also be evaluated based on the pricing model, available discount options, multi-year commitment, availability of free-tier usage, etc. However, competitive pricing alone cannot be the deciding factor if other factors are the priority.
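The pricing comparison above often comes down to modeling on-demand cost against committed-use cost for the same workload. The hourly rate and the 30% discount in this sketch are invented for illustration; real discount structures vary per Hyperscaler, service, and commitment term:

```python
# Hypothetical sketch comparing on-demand vs committed pricing for one
# always-on workload over a year (8,760 hours). The $0.10/hour rate and
# the 30% commitment discount are made-up example figures.
def annual_cost(hourly_rate, hours=8760, commit_discount=0.0):
    """Total cost for the period after applying a commitment discount."""
    return hourly_rate * hours * (1 - commit_discount)

on_demand = annual_cost(0.10)                        # no commitment
one_year = annual_cost(0.10, commit_discount=0.30)   # e.g. 30% off
print(round(on_demand, 2), round(one_year, 2))
print("savings:", round(1 - one_year / on_demand, 2))
```

Even a simple model like this helps frame the trade-off: a multi-year commitment locks in savings but works against the flexibility that motivated the multi-cloud strategy in the first place.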
- Skills & Expert Availability – An enterprise must have or engage an expert who can help the team and guide them during this switch-over journey. In addition, define a path for the internal teams to learn new skills and get certified.
As the public cloud offerings and services expand, enterprises have multiple options at their disposal and can pick the Hyperscaler most suitable for their workloads. Workload mobility across clouds will become a general pattern, driven by service cost, application latency and/or the need for additional resources, etc. Though it may not be ideal for critical production-grade workloads/applications with regulatory and compliance requirements, it is well suited to other workloads like product testing, scalability testing, code development, etc., which account for around 30%-40% of workloads. Such workloads can use this mobility to achieve cost optimization.
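Cost-driven placement of such movable workloads can be sketched very simply: given comparable rates from each provider, route the run to the cheapest one. The price table below is invented; a real comparison would pull current rates from each provider's pricing pages or APIs:

```python
# Sketch of cost-based placement for movable, non-critical workloads
# (testing, dev, etc.). All hourly rates here are made up; real rates
# differ by region, instance shape, and discount tier.
PRICES_PER_HOUR = {
    "aws": 0.096,
    "azure": 0.092,
    "gcp": 0.089,
}

def cheapest_provider(hours_needed, prices=PRICES_PER_HOUR):
    """Pick the provider with the lowest total cost for this run."""
    return min(prices, key=lambda p: prices[p] * hours_needed)

print(cheapest_provider(200))  # gcp, with these made-up rates
```

In practice the decision would also weigh data-egress charges and latency, which is why the article limits this pattern to workloads without strict regulatory or production constraints.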
Earlier, due to the limited number of cloud service providers, enterprises had to worry about service outages, vendor lock-in, delay in problem resolution, vendor insolvency, etc. But with the blooming Hyperscaler ecosystem, enterprises are flooded with choices. This leads to another challenge of effectively managing a multi-cloud setup. However, enterprises can use multi-cloud management solutions from vendors like IBM (Cloud Pak), Micro Focus (Hybrid Cloud Management X), Flexera (Cloud Management Platform), Scalr, ServiceNow (ITOM Cloud Management), etc. to ensure seamless operations.
A multi-cloud strategy also demands well-defined governance; otherwise, operating costs may increase due to lack of awareness or poor control mechanisms. An inefficient control mechanism can lead to underutilized resources that silently consume money in the Cloud. It is recommended to set up a central body responsible for managing Cloud resources and ensuring proper governance. Creating a self-service portal with proper workflows is a good approach to managing cost and preventing mismanagement.
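One concrete governance check behind the "underutilized resources" point is flagging VMs whose average CPU utilization stays below a threshold, as candidates for rightsizing or shutdown. The fleet, metrics, and 10% threshold in this sketch are all invented for illustration:

```python
# Illustrative governance check: flag VMs averaging below a CPU-usage
# threshold over the observation window. Instance names, samples, and
# the 10% threshold are made-up example values.
def underutilized(vms, cpu_threshold=0.10):
    """Return names of VMs whose mean CPU utilization is below threshold."""
    return [
        name for name, samples in vms.items()
        if sum(samples) / len(samples) < cpu_threshold
    ]

fleet = {
    "web-1": [0.55, 0.60, 0.48],
    "batch-7": [0.04, 0.02, 0.05],
    "dev-3": [0.08, 0.06, 0.07],
}
print(underutilized(fleet))  # ['batch-7', 'dev-3']
```

A central governance body would run checks like this per provider and feed the results into the self-service portal's approval workflow.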
Today, we are already consuming "serverless" services from the cloud service providers, but in the future we may see a new business model where enterprises just pay for the services and don't have to worry about where exactly they are hosted. In the current product market, acquisition is a common strategy adopted by companies to expand their customer base, add unique services to their portfolio, and/or enhance their capabilities. Tomorrow, the trend may continue among the Hyperscalers too. Who knows what's next in the technology roadmap?
I would like to thank Satish Billakota, Vice President – Cloud Services, for his esteemed guidance in shaping up this article.
By Harish Chauhan
Harish Chauhan has 30 years of overall IT experience and has numerous technical publications, four patents (three granted), an IBM Redbook, and two whitepapers on Big Data to his credit. His areas of specialization include Cloud Computing, Big Data/Hadoop, Containers/Kubernetes, Automation, DevOps, etc. He holds a bachelor's degree in Computer Science and Engineering, five GCP certifications and an Azure certification, and is ITIL v3 Foundation certified and a certified Cloudera Administrator for Hadoop.