One of our clients is a software developer that specializes in design and manufacturing software solutions. The firm caters to some of the biggest organizations in sectors such as engineering, architecture, manufacturing, and entertainment. Every new release and update our client rolls out must meet stringent SLA requirements.
To ensure their products operate as specified by their SLAs, the software company leverages Apache Spark to process and analyze large volumes of data and glean actionable insights. These insights allow the developer to quickly identify and address performance issues and roll out better products.
But the company lacked Spark expertise. On top of that, they didn’t have a comprehensive observability tool to measure Spark performance and help them recognize and address the complications stemming from it. As a result, the software vendor wasn’t getting full value from its Spark investment. That changed when Pepperdata came into the picture and transformed the way the company worked with Spark.
Spark Complications
While Spark enabled the software firm to analyze large volumes of data and derive actionable insights on performance, customer journeys, sales, and more, unoptimized Spark performance came at a steep cost.
Complicating the situation, the software company did not have in-house Spark expertise. Without adequate knowledge and experience of the Spark framework, they resorted to what seemed like the most logical fix: throwing more compute resources at the problem.
Although this approach allowed them to run more Spark jobs, Spark performance itself remained unoptimized, and without an observability tool they had no visibility into their Spark applications and workflows. The result was more wasted resources and compute capacity. The problem intensified as the company expanded: more customers meant more Spark jobs, and the resulting increase in compute consumption soon ate up most of their big data budget.
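As a loose illustration (not the company’s actual setup), the “just add hardware” pattern often looks like a fixed fleet of oversized executors, while a tuned job right-sizes executor memory, cores, and shuffle partitions. All names and values below are hypothetical:

```python
from pyspark.sql import SparkSession

# A common "just add hardware" pattern is a fixed fleet of large executors
# sized for the worst case, e.g.
#   spark.executor.instances = 50, spark.executor.memory = 16g
# Most of that capacity sits idle outside of peak load.

# A right-sized sketch instead: smaller executors and a shuffle partition
# count matched to the data volume. All values here are illustrative.
spark = (
    SparkSession.builder
    .appName("nightly-etl")                         # hypothetical job name
    .config("spark.executor.instances", "20")
    .config("spark.executor.memory", "6g")
    .config("spark.executor.cores", "4")
    .config("spark.sql.shuffle.partitions", "400")  # sized to the dataset, not a default
    .getOrCreate()
)
```

The point of the sketch is that resource settings chosen per workload, rather than copied from the largest job, are what keep cluster utilization in line with actual demand.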
Overspending and Overprovisioning: Not Just Their Problem
The organization’s struggles with Spark aren’t exactly news. Enterprises that rely heavily on Spark for their big data workloads face the same difficulties, particularly poor resource utilization and overshooting their big data budgets.
Cost is one of the main issues that come with running a big data framework like Spark. In our recent 2021 Big Data Cloud survey, 64% of companies said they constantly contend with “cost management and containment” when using big data technologies and applications.
While most cloud providers offer autoscaling to help enterprises meet compute demands during traffic surges, default autoscaling configurations are based on peak-level requirements. This means many big data processes and workloads run unoptimized. It’s no wonder many enterprises spend as much as 40% more than their initial big data cloud budgets.
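As a rough sketch of the alternative (assumed settings, not drawn from the survey or any client environment), Spark’s dynamic allocation can be capped well below peak-sized defaults so a job scales with demand instead of permanently reserving peak capacity:

```python
from pyspark.sql import SparkSession

# Sketch of capping elasticity instead of provisioning for peak.
# The min/initial/max executor counts below are illustrative only.
spark = (
    SparkSession.builder
    .appName("autoscaled-report")  # hypothetical job name
    .config("spark.dynamicAllocation.enabled", "true")
    # shuffle tracking lets executors be released without an external
    # shuffle service (available in Spark 3+)
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")
    .config("spark.dynamicAllocation.initialExecutors", "4")
    .config("spark.dynamicAllocation.maxExecutors", "20")  # cap below peak-sized defaults
    .getOrCreate()
)
```

The trade-off is deliberate: a lower executor cap bounds spend, while the minimum keeps enough capacity warm to avoid cold-start lag on routine workloads.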
Many enterprises end up overprovisioning compute resources. While this ensures they have enough capacity during peak hours, that capacity sits idle and goes to waste when traffic is lower than expected. In the world of big data and cloud computing, unutilized resources mean lost money.
A Race Against Time
Big data specialists at the software company acknowledged how difficult it was to find the ideal Spark configurations. But they knew they needed a powerful optimization solution to turn things around and make Spark an efficient, cost-effective big data engine.
By 2020, their data processing requirements had increased 10x over the previous year. They needed an optimization and observability tool to keep spending in check and maximize the utilization of their compute resources.
Otherwise, sticking with the same approach of adding more resources would have led to overwhelming costs while the company continued to suffer performance lags and downtime.
Superior Observability with Pepperdata
The software organization knew they needed observability into Spark and their overall big data infrastructure to identify problems, enhance Spark performance, and bring down Spark costs. Turning to Pepperdata, the company found the observability tool they needed.
Not long after deploying Pepperdata, the software developer was deriving powerful, actionable insights into their Spark environment. With that visibility, they were able to tune Spark effectively, so applications and workloads ran on optimal configurations.
With compute resources optimized and properly utilized, the software vendor drastically reduced compute waste. The capacity freed up lets the company run more applications without overshooting its budget.
Interested in how the software company did it with Pepperdata? Read the full case study and discover the Pepperdata advantage now.