The Future of Hyperscalers: An Interview with Jeff Collins on the Enterprise Exodus and Emerging Trends

In this interview with CloudTweaks, we dive deep into the future of hyperscalers with Jeff Collins, a seasoned IT expert with 24 years of experience. Jeff, who recently contributed an insightful article on the ongoing enterprise exodus from hyperscalers, continues the conversation on the shifting cloud landscape. With a robust background that includes roles at AT&T, QTS Data Centers, 2nd Watch, and Armor, Jeff has always been at the forefront of cloud solutions, security, and FinOps. Currently leading the Private Cloud Solution at Hivelocity, he is driving innovation in colocation, connectivity, and cloud management. Join us as Jeff shares his perspective on the challenges and opportunities shaping the future of hyperscalers, and on how businesses can keep their environments secure and cost-effective as they navigate this transformation.

What are the primary reasons enterprises are considering moving away from hyperscalers like AWS, Azure, and Google Cloud?

As companies’ applications continue to evolve, more and more of them are moving away from the traditional way of thinking about deployment. Long gone are the days of the dedicated server and the typical three-tier architecture. Clients are also adopting application modernization and rethinking how their applications are deployed. They are utilizing DevOps methodologies and embracing CI/CD pipelines to continually release code updates. Think of your iPhone, your kid’s Fortnite video game, or even your Tesla, each constantly receiving new features and updates.

More and more clients are moving to a hybrid environment where they analyze each application, what it does, who it serves, and which environment it will run best in. This has caused companies to do two things:

  1. Look at alternatives to public cloud for a variety of reasons (in-house technical expertise, costs, migration complexity, etc.)
  2. Adopt more advanced cloud-native services (serverless, Kubernetes/containers, IaC, machine learning, AI, data analytics/DataOps, etc.) to run their applications more efficiently.

Not only does this approach allow companies to enhance their applications, it also keeps them from putting all their eggs in one basket. They have the freedom to diversify and make their applications more efficient to drive their business and bottom line.

How do cost considerations play a role in the decision-making process for enterprises exploring alternatives to hyperscalers?

Cost has always been and will continue to be a concern for any business. The fact of the matter is, if you don’t know what you are doing in public cloud, you can very easily see your costs escalate and spiral out of control. Within the past several years, this concern around cost has built the business of Financial Operations (FinOps) and Cloud Cost Optimization. Saving companies money in public cloud is a huge business, and many companies pay providers for this service.
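
To make that concrete, below is a minimal sketch of the kind of programmatic cost visibility a FinOps practice starts with, pulling per-service spend from AWS Cost Explorer via boto3. The budget threshold and date range are illustrative assumptions, not recommendations.

```python
# Minimal FinOps-style cost check: pull a month's spend per service
# from AWS Cost Explorer and flag anything over an (assumed) budget line.
# Requires boto3 and credentials with ce:GetCostAndUsage permission.
import boto3

BUDGET_PER_SERVICE = 500.0  # illustrative threshold in USD, not a real budget

# Cost Explorer is served from us-east-1 regardless of where workloads run.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if cost > BUDGET_PER_SERVICE:
        print(f"{service}: ${cost:,.2f} exceeds the budget line")
```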

Customers are also increasingly looking for a fixed pricing model that is easier to budget for, and that demand has continued to drive growth in the bare metal, private/multi-tenant, and colocation spaces.

What are some of the key challenges enterprises face when transitioning away from hyperscalers, and how can they mitigate these risks?

Migration out of public cloud can be a daunting task. Much like a casino in Vegas, hyperscalers purposely make it easy to get in (Azure Migrate is a great example) but hard to get out, and they typically introduce financial penalties (large egress fees) to dissuade clients from leaving. Even the promise of container portability has had its challenges, as cloud-native container products such as ECS/EKS on AWS and AKS on Azure are stickier than advertised. Solutions such as Tanzu running on VMware have proven to be good alternatives, since they typically do not carry the “egress taxes” imposed by public cloud providers.
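
The scale of those egress fees is easy to underestimate. Here is a back-of-the-envelope sketch; the per-GB rate and free allowance are illustrative assumptions, not any provider's published pricing.

```python
# Back-of-the-envelope egress cost for a one-time migration out of a
# public cloud. The rates below are illustrative assumptions -- check
# your provider's current price sheet and free-tier allowances.
EGRESS_RATE_PER_GB = 0.09   # assumed $/GB to the public internet
FREE_TIER_GB = 100          # assumed monthly free allowance

def one_time_egress_cost(dataset_gb: float) -> float:
    """Estimate the fee to move dataset_gb out in a single month."""
    billable = max(dataset_gb - FREE_TIER_GB, 0)
    return billable * EGRESS_RATE_PER_GB

# Moving 50 TB out at these assumed rates:
print(f"${one_time_egress_cost(50 * 1024):,.2f}")  # ~$4,599
```

At these assumed rates, a 50 TB exit runs into the thousands of dollars before the migration work itself even begins.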

Transitioning from hyperscalers presents significant challenges. Technical complexity is foremost – migrating applications and data requires careful planning to prevent disruptions. This risk can be mitigated through phased migration approaches with clearly defined rollback procedures. Skill gaps emerge as teams familiar with hyperscaler environments need retraining on new platforms, requiring investment in training and potentially temporary specialists. Service continuity remains critical and can be maintained by implementing parallel environments while gradually shifting workloads after thorough testing. Integration issues arise from connected systems and dependencies, requiring comprehensive dependency mapping before migration. Finally, finding replacements for hyperscaler-specific services often necessitates identifying functional alternatives or redesigning around open standards.
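
As a sketch of what that "comprehensive dependency mapping" can feed into, the snippet below orders systems into migration waves so that nothing moves before the things it depends on. The dependency graph itself is invented for illustration.

```python
# Dependency-aware migration sequencing: model which systems depend on
# which, then emit "waves" that can move in parallel once everything
# they depend on has already been migrated. Hypothetical systems only.
from graphlib import TopologicalSorter

# app -> set of systems it depends on (which must migrate first)
depends_on = {
    "web-frontend": {"api-gateway"},
    "api-gateway": {"auth-service", "orders-db"},
    "auth-service": {"users-db"},
    "orders-db": set(),
    "users-db": set(),
}

ts = TopologicalSorter(depends_on)
ts.prepare()
wave = 1
while ts.is_active():
    ready = list(ts.get_ready())  # everything unblocked right now
    print(f"Wave {wave}: {sorted(ready)}")
    ts.done(*ready)               # mark this wave as migrated
    wave += 1
```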

How does the rise of multi-cloud and hybrid cloud strategies influence the shift away from reliance on a single hyperscaler?

Clients are quickly adopting hybrid strategies for running their applications. Some may be able to take advantage of Google Cloud Platform (GCP), while others are heavily entrenched in Microsoft and run more efficiently in Azure. Still others are completely hardware- and platform-agnostic and may simply need bare metal to run most efficiently.

Multi-cloud and hybrid cloud strategies are fundamentally changing how enterprises approach cloud computing. These approaches enable organizations to select optimal services from different providers based on specific needs rather than committing entirely to one ecosystem. They allow for gradual transitions rather than complete migrations, reducing risk. By distributing workloads across multiple environments, businesses gain redundancy and improved disaster recovery capabilities. This diversification also creates leverage in vendor negotiations and allows matching of workloads to their ideal environments for cost, performance, or compliance reasons. While multi-cloud approaches introduce additional complexity in orchestration and management, they ultimately provide greater flexibility and resilience against vendor-specific risks.
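
One way to picture that "matching of workloads to their ideal environments" is as a simple weighted-scoring exercise. The sketch below is a toy model; every environment name, score, and weight in it is hypothetical.

```python
# Toy sketch of workload placement across a multi-cloud portfolio:
# score each candidate environment on weighted criteria and pick the
# best fit. All environments, scores, and weights are hypothetical.

# Per-environment scores (0-10) on the three axes from the discussion.
environments = {
    "hyperscaler":   {"cost": 4, "performance": 8, "compliance": 7},
    "private-cloud": {"cost": 7, "performance": 7, "compliance": 9},
    "bare-metal":    {"cost": 8, "performance": 9, "compliance": 6},
}

def best_fit(weights: dict[str, float]) -> str:
    """Return the environment with the highest weighted score."""
    def score(env: dict[str, int]) -> float:
        return sum(weights[axis] * env[axis] for axis in weights)
    return max(environments, key=lambda name: score(environments[name]))

# A latency-sensitive workload weights performance heavily:
print(best_fit({"cost": 0.2, "performance": 0.6, "compliance": 0.2}))
# A regulated workload weights compliance heavily:
print(best_fit({"cost": 0.2, "performance": 0.2, "compliance": 0.6}))
```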

What role does vendor lock-in play in the growing dissatisfaction with hyperscalers, and how can enterprises avoid it?

Back to what I said above: it’s easy to get in and hard to get out in most cases, so approaching IT from a hybrid perspective and determining where each application will run best is crucial to avoiding lock-in, especially when deploying in AWS, Azure, or GCP. Another major consideration is the knowledge of your IT staff. Companies may invest in AWS- or Azure-certified engineers (not a cheap headcount, by the way) to operate their applications and can easily get locked in that way as well. Your AWS-certified cloud engineer may not have any expertise in VMware, for example, and therefore you are locked into a provider from both a technology and a labor perspective.

Can you discuss the importance of performance and latency optimization in driving enterprises toward alternative cloud providers or on-premise solutions?

Latency is a concern for any application. You always want to ensure that you have enough compute resources to power your application, that your end users are geographically close enough to your infrastructure to keep latency down, and that whatever network is being utilized has enough bandwidth to support the traffic.

Public cloud providers have built out various geographic regions and availability zones, and have integrated content delivery network (CDN) services within those regions, to solve these issues.
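
A simple way to see those geography effects for yourself is to time TCP handshakes to candidate endpoints from where your users actually sit. The probe below is a minimal sketch; the hostnames are placeholders, not real endpoints.

```python
# Minimal latency probe: time a TCP handshake to each candidate
# endpoint to compare regions/providers from the user's vantage point.
# The hostnames below are placeholders, not real service endpoints.
import socket
import time

CANDIDATES = [
    ("us-east.example.com", 443),
    ("eu-west.example.com", 443),
    ("edge-pop.example.com", 443),
]

for host, port in CANDIDATES:
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=3):
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{host}: {elapsed_ms:.1f} ms TCP connect")
    except OSError as exc:
        print(f"{host}: unreachable ({exc})")
```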

How do emerging technologies like edge computing and distributed cloud architectures contribute to the exodus from hyperscalers?

Alternative providers have taken this same approach when it comes to Enterprise Cloud locations, bare metal data centers, and edge computing, incorporating SD-WAN and CDN technologies into their networks to ensure performance and minimize latency.

What are the advantages of working with smaller, more specialized cloud providers compared to hyperscalers, particularly in terms of customer support and flexibility?

Smaller providers can offer end users a more personalized service, where clients work with the same group of engineers who support their environment rather than just being a number. More specialized cloud providers can also bring expertise in areas such as compliance, application management, security, and data operations.

How do enterprises balance the need for scalability and reliability when moving away from hyperscalers to alternative solutions?

I think they demand both scalability and reliability from all of their solution providers, be they hyperscalers or alternative providers. Having built managed services from the product perspective on all major public clouds (AWS, Azure, GCP) as well as VMware, I can say that at the end of the day, clients want their environments to work, and when there are issues (we all know that in the world of IT there are always issues), they want a provider that responds in a timely manner, engages the correct resources, communicates well, and remediates quickly.

In the context of this shift, how do providers like Hivelocity position themselves as competitive alternatives to hyperscalers, especially for businesses seeking high-performance infrastructure and personalized service?

We at Hivelocity have built a product portfolio to address many of our clients’ needs. We offer services for colocation, VPS/VDS, Bare Metal, and Enterprise Cloud, with service levels based on how hands-on our clients want to be. Most of our clients have embraced the OPEX model that IaaS brings to the table, and we offer fully managed services on Enterprise Cloud for clients who don’t want to deal with deploying, monitoring, and managing their compute and storage resources and would rather focus on running their businesses.

By Randy Ferguson