Right-Sizing Workloads for Success in the Cloud

IT and Operations teams are being tasked with creating a strategy to ensure cloud success for their organization. They are often disappointed when, after migrating their workloads to the public cloud, the promised efficiencies and savings just don’t materialize. A report from Bain & Company, Rightsizing Your Way to the Cloud, includes results from an analysis of more than 60,000 workloads. Bain asked more than 350 IT decision-makers which aspects of their cloud deployment had been most disappointing and had under-delivered on their expectations. The top complaint was that the cost of ownership had not declined; in some cases, it had actually increased. Why were expectations not being met?

The report found that when companies don’t perform the necessary assessments and preparation, migrating workloads to the public cloud can be up to 15 percent more expensive than keeping them in a legacy, on-premises environment. In other words, despite the vaunted promise of the cloud, it can be more cost-effective to leave things unchanged.

The Problem? Existing Inefficiencies Are Being Transferred to the Cloud

Bain’s analysis revealed that 84 percent of on-premises workloads are over-provisioned, with more compute power, memory and storage than they need to operate efficiently. Over-provisioning typically happens with mission-critical workloads: IT operations teams buy more hardware than required so there is enough performance headroom to satisfy peak periods of workload demand. This often means scaling out with more servers full of hard drives to increase capacity and minimize latency. While hard drive costs are reasonable, a massive scale-out like this drives up power, cooling, and management costs.
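To make the cost impact concrete, here is a minimal sketch of how excess servers compound operating expense. Every dollar figure and server count below is a made-up assumption for illustration, not a number from the Bain report.

```python
# Hypothetical illustration of how scale-out multiplies operating cost.
# All figures are assumptions chosen for the example.

SERVER_COST_PER_YEAR = 5000    # amortized hardware cost per server
POWER_COOLING_PER_YEAR = 1200  # power and cooling per server
MANAGEMENT_PER_YEAR = 800      # admin and management overhead per server

def yearly_cost(server_count: int) -> int:
    """Total yearly cost of running a fleet of identical servers."""
    per_server = (SERVER_COST_PER_YEAR
                  + POWER_COOLING_PER_YEAR
                  + MANAGEMENT_PER_YEAR)
    return server_count * per_server

needed = 50        # servers sized to actual peak demand
provisioned = 84   # an over-provisioned fleet
excess = yearly_cost(provisioned) - yearly_cost(needed)
print(f"Excess spend: ${excess:,} per year")  # $238,000 under these assumptions
```

The point of the sketch is that per-server overhead is paid on every excess machine, so waste scales linearly with the size of the over-provisioned fleet.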

When organizations taking this approach migrate a workload to the cloud, they send the excess computing and storage capacity right along with it. Instead of becoming more efficient, they essentially transfer their existing inefficiencies to a new location; this approach is commonly known as lift-and-shift. Bain found that stripping away excess resource capacity can lower cloud-migration costs by up to 60 percent while also reducing the long-term costs of running workloads in the cloud. Other IT advisory experts concur: according to a recent cloud cost optimization study by Forrester Research, tackling wasteful cloud usage and exploding cloud spend is an important first step in cloud management.

The Solution? Right-Sizing

Right-sizing a workload involves reassessing the true amount of storage and compute power it needs. To determine this, organizations can monitor workload demand over a period of time and measure both average and peak compute resource consumption. Organizations that anticipate migrating to the public cloud should take a disciplined approach to right-sizing their workloads, one that involves a thorough assessment of computing and storage practices across the enterprise. Bain’s experience shows that right-sizing IT resources can cut operational and capital expenses by as much as 30 to 60 percent.
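As a rough illustration of that measurement step, the sketch below derives average and peak utilization from monitoring samples and proposes a capacity target. The function name, sample data, and 20 percent headroom figure are illustrative assumptions, not a prescribed methodology.

```python
from statistics import mean, quantiles

def right_size(samples, headroom=0.2):
    """Suggest a capacity target from observed utilization samples.

    samples: utilization readings (e.g. vCPUs or GB in use) collected
    over a representative period. headroom is the margin kept above the
    observed peak. Names and numbers here are illustrative only.
    """
    avg = mean(samples)
    # Use the 95th percentile as the "peak" so a one-off spike does not
    # dominate the sizing decision.
    p95 = quantiles(samples, n=100)[94]
    return {"average": round(avg, 1),
            "p95_peak": round(p95, 1),
            "target": round(p95 * (1 + headroom), 1)}

# Example: hourly vCPU usage for a workload provisioned with 32 vCPUs.
cpu_usage = [6, 7, 5, 8, 12, 14, 9, 7, 6, 11, 13, 10]
print(right_size(cpu_usage))
# A target near 17 vCPUs suggests the 32-vCPU allocation is roughly 2x oversized.
```

Sizing to a high percentile rather than the absolute maximum keeps a single anomalous spike from forcing a permanently larger allocation.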

Cloud Migration Is a Long-Term Process, Not an Event

Cloud migration is a long-term process that can extend for years in massive, scale-out Hadoop and Spark distributed computing environments. Most organizations migrate one workload or application at a time. Not every workload belongs in the cloud, and organizations will want to evaluate which ones are suitable candidates. Actually migrating a workload to the cloud takes time because of the multiple pre-migration steps involved. Many workloads will still be operating on-premises while the migration is underway, and workloads that are not good cloud candidates will remain on-premises for the foreseeable future.

Pepperdata customers are realizing the benefits of right-sizing cloud and on-premises infrastructure resources today with Pepperdata Capacity Optimizer. Capacity Optimizer takes a unique approach to right-sizing by identifying wasted, excess capacity in your big data cluster resources. By monitoring your cloud and on-premises infrastructure in real time, including hardware and applications, and leveraging AI with active resource management, Pepperdata Capacity Optimizer automatically recaptures wasted capacity from existing resources and adds tasks to those servers. The net benefit is an increase in enterprise cluster throughput of 30 to 50 percent or, conversely, a 30 to 50 percent reduction in infrastructure resource requirements.
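To illustrate the underlying idea (and only the idea: this hypothetical sketch is not Pepperdata’s implementation), the code below compares what each node has allocated against what its tasks actually consume, surfacing headroom where additional work could be scheduled.

```python
# A simplified, hypothetical illustration of capacity recapture: compare
# what a node has allocated against what its tasks actually use, and flag
# the difference as headroom for additional work. This sketches the general
# technique only; it is not Pepperdata's implementation.

def recapturable_memory(nodes):
    """Return per-node memory (GB) that is allocated but unused."""
    headroom = {}
    for name, node in nodes.items():
        # Allocated-but-idle capacity: the scheduler considers it busy,
        # but actual consumption is lower.
        spare = node["allocated_gb"] - node["used_gb"]
        if spare > 0:
            headroom[name] = spare
    return headroom

cluster = {
    "node-1": {"allocated_gb": 128, "used_gb": 70},
    "node-2": {"allocated_gb": 128, "used_gb": 122},
}
print(recapturable_memory(cluster))  # {'node-1': 58, 'node-2': 6}
```

In this toy cluster, node-1 reports substantial recapturable capacity even though its full allocation appears committed, which is exactly the kind of waste the paragraph above describes.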

Whether in the cloud or on-premises, with Capacity Optimizer, you can:

  • Automatically adjust resource utilization to match workload requirements
  • Eliminate unnecessary spend on new hardware, CPU and memory
  • Run more jobs concurrently on existing infrastructure
  • Optimize infrastructure performance and ROI

Sign up for our upcoming webinar, Too Much of Anything? Right-Sizing Your Big Data in the Cloud, to learn more about how to right-size for the cloud.