In part one of this two-part blog post, we began our deep dive into Apache Spark tuning to optimize resources. We looked at what is involved in executor and partition sizing, particularly choosing the number of partitions and choosing an executor size. After establishing some principles of optimization here, we ended by asking an important question: Is it really practical for all applications to be optimized?

As our recent State of the Market report helped reveal, the answer is two-sided. The good news? Yes. The bad news? It’s pretty much impossible without the right tools.

The Challenge of Optimizing All Your Apps

It’s actually quite difficult even for knowledgeable developers to keep their jobs optimized because it’s not always clear how many resources an application will use. Pepperdata Platform Spotlight helps with this by showing both the allocated and the used resources for a given job run (see Figure 1).


Figure 1: Pepperdata chart showing allocated and used resources.

With data on both allocated and used resources, we can judge whether an application is making efficient use of its allocation. In this example, from an actual customer’s application, very little of the allocated capacity is actually being used, so we’d recommend decreasing the size of the app’s executors.

As useful as these charts are, they only help if we monitor our applications continually. The Pepperdata App Details page is a good way to see the recent history of an application’s runs (see Figure 2).


Figure 2: Pepperdata App History page.

It’s clear that tuning Spark applications and keeping them optimized requires a real investment of time in testing and monitoring. For a developer handling data at the petabyte scale, it’s completely understandable that these tasks fall through the cracks.

And when you extrapolate these chores across hundreds of users handling tens of thousands of jobs every month, the task becomes daunting. The fact of the matter is that most organizations simply don’t carry out Spark tuning and optimization. Our observations show that developers tend to size their executors overly large to stay on the safe side and avoid out-of-memory errors. While understandable, this typically leads to seriously under-utilized clusters: across our customer base, we’ve found that only 29.8% of the resources allocated to Spark applications are actually used. That’s a costly way to play it safe.

So what’s the remedy? Again, it’s Pepperdata: specifically, Pepperdata Capacity Optimizer.

Pepperdata Capacity Optimizer

Capacity Optimizer is our approach to minimizing resource waste through tuning Spark and MapReduce (Hadoop) applications. It runs in the background and is completely transparent to developers, freeing them to focus on the thing they were hired for: developing applications to support business goals.

Capacity Optimizer works by communicating with the YARN Resource Manager (RM), telling it how much more load each of the hosts in the cluster can handle. Usually, the RM looks at the resource allocation parameters yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to see how many resources are available on a host and to determine whether it can take on another executor. If applications use less than 30% of their allocated resources, the host can typically handle many more of them.
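In simplified terms, the stock RM admission check considers only allocated headroom. Here is a minimal sketch of that logic (illustrative names and simplified logic, not YARN’s actual code):

```python
def host_can_fit(req_mem_mb, req_vcores,
                 alloc_mem_mb, alloc_vcores,
                 committed_mem_mb, committed_vcores):
    """Simplified stock-RM check: a new executor is admitted only if its
    request fits within the host's remaining *allocated* headroom,
    regardless of how much of that allocation is actually being used."""
    return (committed_mem_mb + req_mem_mb <= alloc_mem_mb
            and committed_vcores + req_vcores <= alloc_vcores)

# A host advertised via yarn.nodemanager.resource.memory-mb = 32768 (32 GB)
# and yarn.nodemanager.resource.cpu-vcores = 32, already carrying 32 GB of
# committed allocations, rejects another 4 GB / 1-core executor even if the
# running executors are mostly idle:
print(host_can_fit(4096, 1, 32768, 32, 32768, 8))   # False: no allocated headroom
print(host_can_fit(4096, 1, 32768, 32, 28672, 7))   # True: 4 GB still unallocated
```

Capacity Optimizer’s contribution, described below, is to feed the RM a larger effective allocation when actual usage shows the host can take more.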

In addition to allocated resources, Capacity Optimizer also looks at used resources and other metrics to make intelligent decisions about a host’s true capacity. If it finds that a host is full in terms of allocated resources, but not full in terms of used resources, it tells the RM that additional executors can be scheduled there.

Let’s walk through an example of what Capacity Optimizer can do for us:


NodeA in a YARN cluster has 32 GB of physical memory and 32 physical cores. The Node Manager on NodeA is configured so that its allocation matches its physical resources.

A Spark app, app1, asks the RM for 4 GB of memory and 1 core for each executor.
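Such a per-executor request typically comes from the app’s submit-time settings, for example (ignoring spark.executor.memoryOverhead, which YARN adds on top of the requested executor memory; app1.py is a placeholder entry point):

```shell
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --executor-cores 1 \
  app1.py
```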

                         NodeA                    Single app1 executor    8 executors run on NodeA
Static (RM) Allocation   32 GB, 32 cores total    4 GB, 1 core            32 GB, 8 cores allocated

With that resource allocation, NodeA can run up to eight (8) of app1’s executors. But in this typical example, we find that most of app1’s executors use only 2 GB of physical memory and 1 physical core, which means that the eight (8) executors in app1 end up using a total of only 16 GB physical memory and 8 physical cores.
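The static packing arithmetic behind those numbers is straightforward; a quick sketch:

```python
# NodeA's advertised allocation and app1's per-executor request
node_mem_gb, node_cores = 32, 32
req_mem_gb, req_cores = 4, 1

# The RM packs executors until either memory or cores run out
max_executors = min(node_mem_gb // req_mem_gb, node_cores // req_cores)
print(max_executors)        # 8: memory is the limiting resource

# But each executor only *uses* about 2 GB and 1 core in practice
print(max_executors * 2)    # 16 GB of 32 GB physical memory actually used
```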

                     NodeA                    Single app1 executor    8 executors run on NodeA
Physical Resources   32 GB, 32 cores total    2 GB, 1 core            16 GB, 8 cores


Capacity Optimizer will see this situation, go through its algorithm, and might determine that NodeA can safely accept up to 48 GB of resource allocations: more than is physically available, but safe because the executors’ actual usage remains well below the node’s physical capacity. Capacity Optimizer would then tell the RM that NodeA can handle resource allocations of up to 48 GB. At that level, the RM will calculate that NodeA can run up to 12 executors, a 50% increase over its original calculation of eight (8), and will be able to schedule more workload.

                                          NodeA                    Single app1 executor    12 executors run on NodeA
Dynamic (Capacity Optimizer) Allocation   48 GB, 32 cores total    4 GB, 1 core            48 GB, 12 cores allocated
Physical Resources                        32 GB, 32 cores total    2 GB, 1 core            24 GB, 12 cores
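Running the same packing arithmetic with the inflated allocation shows both the gain and the safety margin (numbers taken from the example above):

```python
inflated_mem_gb, node_cores = 48, 32   # allocation Capacity Optimizer advertises
req_mem_gb, req_cores = 4, 1           # app1's per-executor request
used_mem_gb_per_exec = 2               # what each executor actually consumes

max_executors = min(inflated_mem_gb // req_mem_gb, node_cores // req_cores)
print(max_executors)                   # 12: a 50% increase over 8

physical_used = max_executors * used_mem_gb_per_exec
print(physical_used)                   # 24 GB, still under the 32 GB physical limit
```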


This example shows the power of Capacity Optimizer to enable each host in the cluster to utilize more resources, and ultimately perform more work, with no developer effort or intervention.

While Capacity Optimizer doesn’t tune Spark apps directly, the results are largely the same. We have Capacity Optimizer working for our largest customers, who in aggregate run thousands of hosts and millions of jobs. During the busiest of periods, we’ve seen increases of greater than 60% in the number of tasks being executed, resulting in millions of dollars saved each year.


We’ve described two approaches to effectively tune Spark applications.

  • The first approach is the traditional one: investing developer time in learning the inner workings of Spark executors, performing iterative testing, and constantly and carefully monitoring your jobs. Pepperdata application management tools provide essential data for those efforts, but they cannot remove the demand on developers’ time.
  • The second approach is simpler: use Pepperdata Capacity Optimizer to detect underutilized hosts and enable additional tasks to run on them. It works behind the scenes, automatically; it places no demands on the development team; and it has proven effective for our customers.

To read more on our recent findings regarding wastage and optimization in big data applications, including Spark tuning, download the Big Data Performance Report 2020.

Explore More

Looking for a safe, proven method to reduce waste and cost by up to 47% and maximize value for your cloud environment? Sign up now for a free savings assessment to see how Pepperdata Capacity Optimizer can help you start saving immediately.