Pepperdata Capacity Optimizer

How it Works

  • The scheduler makes real-time resource decisions
  • Pods are spun up based on actual node utilization (see the sketch below)
  • The autoscaler launches nodes more efficiently
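
To make the second bullet concrete, here is a minimal Python sketch (purely illustrative, not Pepperdata's implementation) of the difference between admitting a pod based on summed resource requests and admitting it based on measured node utilization. All names, numbers, and the headroom value are hypothetical.

    # Hypothetical sketch: admit a pod onto a node based on measured usage,
    # not on the sum of resource requests already allocated there.

    def fits_by_requests(node, pod):
        # Conventional check: compare against allocated requests.
        return node["allocatable_cpu"] - node["requested_cpu"] >= pod["request_cpu"]

    def fits_by_actual_usage(node, pod, headroom=0.15):
        # Utilization-aware check: compare against measured CPU usage,
        # keeping a safety margin (15% here, an arbitrary choice).
        free = node["allocatable_cpu"] * (1 - headroom) - node["actual_cpu_used"]
        return free >= pod["request_cpu"]

    node = {"allocatable_cpu": 8.0, "requested_cpu": 7.5, "actual_cpu_used": 3.0}
    pod = {"request_cpu": 1.0}

    print(fits_by_requests(node, pod))      # False: requests say the node is full
    print(fits_by_actual_usage(node, pod))  # True: measured usage leaves room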

Benefits

  • Optimizes vCPU and memory utilization
  • Automatically scales to thousands of nodes
  • Reduces cost everywhere

Built to Support the Most Demanding Workloads at Scale

Supported Workloads

  • Microservices 
  • Apache Spark
  • Apache Airflow 
  • Apache Flink
  • Jobs
  • JobController

Supported Schedulers

  • Default scheduler on Amazon EMR, Amazon EKS, and GKE
  • Apache YuniKorn on Amazon EKS

Supported Autoscalers

  • Amazon EMR Managed Autoscaling and Custom Autoscaling Policy
  • Cluster Autoscaler and Karpenter on Amazon EKS
  • Cluster Autoscaler with and without Node Auto-Provisioning (NAP) on GKE

Lower Costs and Increase Resource Utilization

For Your Most Critical Kubernetes Workloads

Pepperdata Capacity Optimizer dynamically closes the gap between allocated and actual resource utilization—resulting in more pods per node, greater utilization, and lower workload cost.
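
As a back-of-the-envelope illustration of that gap, with invented figures rather than measurements:

    # Illustrative arithmetic only; all numbers are made up.
    requested_vcpu = 64.0   # vCPUs allocated across a node group
    used_vcpu = 22.0        # vCPUs actually consumed, as measured

    gap = requested_vcpu - used_vcpu
    idle_pct = gap / requested_vcpu * 100
    print(f"Allocation gap: {gap} vCPUs ({idle_pct:.0f}% of the allocation sits idle)")
    # -> Allocation gap: 42.0 vCPUs (66% of the allocation sits idle)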

Achieve resource utilization improvements for long-running microservices

Real-time reclamation of resource waste

Align resource requests with actual hardware usage to reduce waste and lower cloud bills, without impacting response time.

Enhanced autoscaling efficiency for Kubernetes workloads

Launch new nodes only when existing nodes are fully utilized, without altering downscaling behavior.
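
A simplified, hypothetical version of that gating decision (the names and threshold below are illustrative, not the product's):

    # Hypothetical scale-up gate: request a new node only when every existing
    # node is genuinely busy by measured utilization, not by allocated requests.

    def should_scale_up(nodes, pending_pods, busy_threshold=0.85):
        # Each node dict carries a measured 'cpu_util' in [0, 1].
        all_busy = all(n["cpu_util"] >= busy_threshold for n in nodes)
        return pending_pods > 0 and all_busy

    nodes = [{"cpu_util": 0.91}, {"cpu_util": 0.47}]  # one node is half idle
    print(should_scale_up(nodes, pending_pods=3))     # False: pack the idle node first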

 

Increased utilization without disruption

Pack more pods onto existing physical nodes while working seamlessly with the Horizontal Pod Autoscaler (HPA), with no pod restarts required.
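
Packing more pods works because requests can safely shrink toward observed usage. A hypothetical sketch of that right-sizing idea (the usage samples and margin are invented):

    # Hypothetical right-sizing sketch; the usage samples are invented.
    import statistics

    observed_cpu = [0.21, 0.25, 0.19, 0.30, 0.27, 0.24, 0.33, 0.22, 0.26, 0.28]

    def right_size(samples, margin=0.2):
        p95 = statistics.quantiles(samples, n=20)[18]  # ~95th percentile
        return round(p95 * (1 + margin), 2)

    current_request = 1.0  # vCPU requested today
    print(f"Suggested request: {right_size(observed_cpu)} vCPU (was {current_request})")
    # -> Suggested request: 0.38 vCPU (was 1.0)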

Unlock peak Spark workload efficiency with dynamic resource optimization

Reduced instance hours

Continuously put unused node resources to work, reducing instance hours and costs with no recommendations to review or apply.

Enhanced autoscaling efficiency for batch jobs

Ensure new nodes are provisioned only when existing nodes are fully utilized.

Optimized memory and CPU usage for Spark executors

Automatically run more Spark workloads on the same hardware, improving efficiency and resource utilization.
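
As a rough, invented illustration of why measured usage matters for executor packing:

    # Back-of-the-envelope executor packing; every figure here is invented.
    node_mem_gib = 64
    executor_request_gib = 8   # memory each Spark executor requests
    executor_actual_gib = 5    # memory each executor typically uses, as measured

    by_request = node_mem_gib // executor_request_gib
    by_usage = node_mem_gib // executor_actual_gib
    print(f"Executors per node: {by_request} by requests, {by_usage} by actual usage")
    # -> Executors per node: 8 by requests, 12 by actual usage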

Realize up to 40% cost savings with Apache Airflow in real time

Cost-effective Airflow operations

Eliminate the cost and waste resulting from task pods allocating more resources than they ever use.

Improved task throughput performance

Reduce pod resource requests automatically so that all existing nodes are continuously packed at optimal capacity.

Significantly more efficient autoscaling

Ensure new nodes are provisioned only when existing nodes are fully utilized.
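
To see where savings on that order can come from, here is a hypothetical back-of-the-envelope for a fleet of Airflow task pods (all figures invented):

    # Illustrative node-count arithmetic for Airflow task pods; figures are invented.
    import math

    task_pods = 200
    request_vcpu = 2.0   # vCPUs each task pod requests
    actual_vcpu = 1.2    # vCPUs each task pod actually uses, as measured
    node_vcpu = 16.0

    nodes_by_request = math.ceil(task_pods * request_vcpu / node_vcpu)  # 25
    nodes_by_usage = math.ceil(task_pods * actual_vcpu / node_vcpu)     # 15
    saving = 1 - nodes_by_usage / nodes_by_request
    print(f"{nodes_by_request} nodes by requests vs {nodes_by_usage} by usage "
          f"({saving:.0%} fewer)")
    # -> 25 nodes by requests vs 15 by usage (40% fewer)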

Process data streams with Apache Flink at greater efficiency

Significantly increased efficiency in large-scale Flink environments

Eliminate the cost and waste resulting from overprovisioning pod resources.

Optimized resource utilization

Dynamically increase node-level resource utilization by packing more Flink workloads based on actual usage metrics.

Minimized developer overhead

Eliminate the need for Flink-level code changes and config tuning.


Reduce the cost of running Custom Batch workloads by up to 75%

Lowered costs without manual intervention

Eliminate the cost and waste resulting from pods allocating more resources than they ever use.

Significantly reduced pod resource requests

Ensure that all existing nodes are optimally packed—automatically, continuously, and in real time.

Improved autoscaling efficiency

Ensure new nodes are provisioned only when existing nodes are fully utilized.


Explore More

Looking for a safe, proven method to reduce resource waste and cost by up to 75% and maximize value for your cloud environment? Sign up now for a free Capacity Optimizer demo to see how you can start saving immediately.