OPTIMIZE PERFORMANCE FOR YOUR ENTIRE BIG DATA STACK


The 451 Take on Cloud-Native: Truly Transformative for Enterprise IT

Helping to shape the modern software development and IT operations paradigms, cloud-native represents a significant shift in enterprise IT. In this report, we define cloud-native and offer some perspective on why it matters and what it means for the industry.

Elements of Big Data APM Success

Pepperdata delivers proven big data APM products, operational experience, and deep expertise.


Request a trial to see firsthand how Pepperdata big data solutions can help you achieve big data performance success. Pepperdata's proven APM solutions provide a 360-degree view of both your platform and applications, with real-time tuning, recommendations, and alerting. See how Pepperdata big data performance solutions help you quickly pinpoint and resolve performance bottlenecks, and why Pepperdata's big data APM solutions are used to manage performance on over 30,000 Hadoop production clusters.

Request Trial

Resources

Cloudwick Collaborates with Pepperdata to Ensure SLAs and Performance Are Maintained for AWS Migration Service

Pepperdata Provides Pre- and Post-Migration Workload Analysis, Application Performance Assessment, and SLA Validation for Cloudwick AWS Migration Customers

San Francisco — Strata Data Conference (Booth 926) — March 27, 2019 — Pepperdata, the leader in big data Application Performance Management (APM), and Cloudwick, a leading provider of digital business services and solutions to the Global 1000, today announced a collaborative offering for enterprises migrating their big data to Amazon Web Services (AWS). Pepperdata provides Cloudwick with a baseline of on-premises performance, maps workloads to optimal static and on-demand instances, diagnoses any issues that arise during migration, and assesses performance after the move to ensure the same or better performance and SLAs.

“The biggest challenge for enterprises migrating big data to the cloud is ensuring SLAs are maintained without having to devote resources to entirely re-engineer applications,” said Ash Munshi, Pepperdata CEO. “Cloudwick and Pepperdata ensure workloads are migrated successfully by analyzing and establishing a metrics-based performance baseline.”

“Migrating to the cloud without looking at the performance data first is risky for organizations and if a migration is not done right, the complaints from lines of business are unavoidable,” said Mark Schreiber, General Manager for Cloudwick. “Without Pepperdata’s metrics and analysis before and after the migration, there is no way to prove performance levels are maintained in the cloud.”

For Cloudwick’s AWS Migration Services, Pepperdata is installed on customers’ existing, on-premises clusters — it takes under 30 minutes — and automatically collects over 350 real-time operational metrics from applications and infrastructure resources, including CPU, RAM, disk I/O, and network usage metrics on every job, task, user, host, workflow, and queue. These metrics are used to analyze performance and SLAs, accurately map workloads to appropriate AWS instances, and provide cost projections. Once the AWS migration is complete, the same operational metrics from the cloud are collected and analyzed to assess performance results and validate migration success.
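The pre- and post-migration comparison described above can be illustrated with a short sketch. The code below is a hypothetical illustration only, not Pepperdata's product code; the job names, runtimes, and the 10 percent SLA tolerance are all assumptions.

    # Hypothetical sketch: compare a pre-migration performance baseline
    # against post-migration measurements for the same workloads.
    # Names and numbers are invented; this is not a Pepperdata API.

    def find_regressions(pre, post, tolerance=0.10):
        """Flag workloads whose runtime regressed beyond `tolerance`."""
        regressions = {}
        for job, pre_runtime in pre.items():
            post_runtime = post.get(job)
            if post_runtime is None:
                continue  # job not yet re-run in the cloud
            change = (post_runtime - pre_runtime) / pre_runtime
            if change > tolerance:
                regressions[job] = round(change * 100, 1)
        return regressions

    # Average job runtimes in seconds, before and after migration.
    pre_migration = {"etl_daily": 1200, "ml_training": 5400, "report_gen": 300}
    post_migration = {"etl_daily": 1150, "ml_training": 6500, "report_gen": 310}

    print(find_regressions(pre_migration, post_migration))
    # {'ml_training': 20.4} -> this workload violates its SLA tolerance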

To learn more, stop by the Pepperdata booth (926) at Strata Data Conference March 25-28 at Moscone West in San Francisco.


About Pepperdata
Pepperdata (https://pepperdata.com) is the leader in big data Application Performance Management (APM) solutions and services, solving application and infrastructure issues throughout the stack for developers and operations managers. The company partners with its customers to provide proven products, operational experience, and deep expertise to deliver predictable performance, empowered users, managed costs, and managed growth for their big data investments, both on-premises and in the cloud. Leading companies like Comcast, Philips Wellcentive, and NBC Universal depend on Pepperdata to deliver big data success.

Founded in 2012 and headquartered in Cupertino, California, Pepperdata has attracted executive and engineering talent from Yahoo, Google, Microsoft, and Netflix. Pepperdata investors include Citi Ventures, Costanoa Ventures, Signia Venture Partners, Silicon Valley Data Capital, and Wing Venture Capital, along with leading high-profile individual investors. For more information, visit www.pepperdata.com.

About Cloudwick

Cloudwick is the leading provider of digital business services and solutions to the Global 1000. Its solutions include data migration, business intelligence modernization, data science, cybersecurity, IoT and mobile application development and more, enabling data-driven enterprises to gain competitive advantage from big data, cloud computing and advanced analytics. Learn more at www.cloudwick.com.

###

Contact:
Samantha Leggat
samantha@pepperdata.com

Pepperdata and the Pepperdata logo are registered trademarks of Pepperdata, Inc. Other names may be trademarks of their respective owners.

March 27, 2019

Pepperdata Announces Free Big Data Cloud Migration Cost Assessment to Automatically Select Optimal Instance Types and Provide Accurate Cost Projections

Pepperdata Eliminates Guesswork and Complexity Associated with Identifying the Best Candidate Workloads, Down to the Queue, Job, and User Level, for Moving to AWS, Azure, Google Cloud or IBM Cloud

CUPERTINO, Calif. — March 6, 2019 — Pepperdata, the leader in big data Application Performance Management (APM), today announced its new Big Data Cloud Migration Cost Assessment for enterprises looking to migrate their big data workloads to AWS, Azure, Google Cloud or IBM Cloud. By analyzing current workloads and service-level agreements (SLAs), the detailed, metrics-based Assessment enables enterprises to make informed decisions, helping minimize risk while ensuring SLAs are maintained after cloud migration.

The Pepperdata Big Data Cloud Migration Cost Assessment provides organizations with an accurate understanding of their network, compute, and storage needs to run their big data applications in the hybrid cloud. Analyzing memory, CPU, and I/O every five seconds for every task, Pepperdata maps on-premises workloads to optimal static and on-demand instances on AWS, Azure, Google Cloud, and IBM Cloud. Pepperdata also identifies how many of each instance type will be needed and calculates cloud CPU and memory costs to achieve the same performance and SLAs as the existing on-premises infrastructure.
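To make the mapping concrete, here is a minimal sketch of how an observed workload profile might be matched to the cheapest instance type that fits it. The instance names, specs, and prices are invented for illustration and do not reflect actual cloud pricing.

    # Hypothetical sketch: map a workload's observed peaks to the
    # cheapest instance type that accommodates them. Instance specs
    # and hourly prices are made up for illustration.

    INSTANCE_TYPES = [
        {"name": "small",  "vcpu": 2,  "mem_gb": 8,  "usd_per_hr": 0.10},
        {"name": "medium", "vcpu": 4,  "mem_gb": 16, "usd_per_hr": 0.20},
        {"name": "large",  "vcpu": 8,  "mem_gb": 32, "usd_per_hr": 0.40},
        {"name": "xlarge", "vcpu": 16, "mem_gb": 64, "usd_per_hr": 0.80},
    ]

    def recommend_instance(peak_vcpu, peak_mem_gb, hours_per_month=730):
        """Return the cheapest instance that fits the workload's peaks."""
        fits = [i for i in INSTANCE_TYPES
                if i["vcpu"] >= peak_vcpu and i["mem_gb"] >= peak_mem_gb]
        if not fits:
            raise ValueError("workload exceeds the largest instance")
        best = min(fits, key=lambda i: i["usd_per_hr"])
        return best["name"], round(best["usd_per_hr"] * hours_per_month, 2)

    # A workload observed peaking at 3 vCPUs and 12 GB of memory:
    print(recommend_instance(3, 12))  # -> ('medium', 146.0)

A real assessment would also weigh sustained versus burst usage, I/O and network profiles, and on-demand versus reserved pricing, but the core fit-and-price idea is the same.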

“When enterprises consider a hybrid cloud strategy, they estimate the cost of moving entire clusters, but that’s not the best approach,” said Ash Munshi, Pepperdata CEO. “It’s far better to identify specific workloads that can be moved to take full advantage of the pricing and elasticity of the cloud. Pepperdata collects and analyzes detailed, granular resource metrics to accurately identify optimal workloads for cloud migration while maintaining SLAs.”

The Big Data Cloud Migration Cost Assessment enables enterprises to:

  • Automatically analyze every workload in your cluster to accurately determine its projected cloud cost
  • Get cost projections and instance recommendations for workloads, queues, jobs, and users
  • Map big data workloads to various instance types including static and on-demand
  • Compare AWS, Azure, Google Cloud, and IBM Cloud

Availability

Pepperdata Big Data Cloud Migration Cost Assessment is available free at pepperdata.com/free-big-data-cloud-migration-cost-assessment. Pepperdata customers should email support@pepperdata.com for their free assessment.


About Pepperdata
Pepperdata (https://www.pepperdata.com) is the leader in big data Application Performance Management (APM) solutions and services, solving application and infrastructure issues throughout the stack for developers and operations managers. The company partners with its customers to provide proven products, operational experience, and deep expertise to deliver predictable performance, empowered users, managed costs, and managed growth for their big data investments, both on-premises and in the cloud. Leading companies like Comcast, Philips Wellcentive, and NBC Universal depend on Pepperdata to deliver big data success.

Founded in 2012 and headquartered in Cupertino, California, Pepperdata has attracted executive and engineering talent from Yahoo, Google, Microsoft, and Netflix. Pepperdata investors include Citi Ventures, Costanoa Ventures, Signia Venture Partners, Silicon Valley Data Capital, and Wing Venture Capital, along with leading high-profile individual investors. For more information, visit www.pepperdata.com.

###

Contact:
Samantha Leggat

925-447-5300
samantha@pepperdata.com

Pepperdata and the Pepperdata logo are registered trademarks of Pepperdata, Inc. Other names may be trademarks of their respective owners.

March 5, 2019

Pepperdata Unveils 360° Reports, Enabling Enterprises to Make More Informed Operational Decisions to Maximize Capacity and Improve Application Performance

360° Reports Empower Executives to Better Understand Financial Impacts of Operational Decisions

CUPERTINO, Calif. — February 19, 2019 — Pepperdata, the leader in big data Application Performance Management (APM), today announced the availability of 360° Reports for Platform Spotlight. Pepperdata 360° Reports leverage the vast amount of proprietary data collected and correlated by Pepperdata to give executives capacity utilization insights so they better understand the financial impacts of operational decisions.

“Pepperdata 360° Reports demonstrate the power of data and the valuable insights Pepperdata provides, enabling enterprises to make more informed and effective operational decisions,” said Ash Munshi, Pepperdata CEO. “Operators get a better understanding of what and where they’re spending, where waste can be reclaimed, and where policy and resource adjustments can be made to save money, maximize capacity and improve application performance.”

360° Reports for Pepperdata Platform Spotlight include:

  • Capacity Optimizer Report: This report gives operators insight into the memory and money saved by leveraging Pepperdata Capacity Optimizer to dynamically recapture wasted capacity.
  • Application Waste Report: This report compares memory requested with actual memory utilization so operators can optimize resources by changing resource reservation parameters. (A simplified sketch of this comparison follows the list.)
  • Application Type Report: This report gives operators insight into the technologies used across the cluster and the percentage of each (percentage of Spark jobs, etc.), providing executives with insight into technology trends so they can make more data-driven investment decisions.
  • Default Container Size Report: This report identifies jobs using the default container size and where any waste occurred so operators can adjust default container sizes to save money.
  • Pepperdata Usage Report: This report presents Pepperdata dashboard usage data, highlighting top users, days used, and more to give operators insights to maximize their investment. With this data, operators can identify activities to grow the user base, such as promoting features, scheduling onboarding sessions, and training on custom alarms.
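As a rough illustration of the comparison behind the Application Waste Report, the sketch below contrasts requested memory with observed peak usage per application. The application names and figures are hypothetical, not Pepperdata output.

    # Hypothetical sketch of a requested-vs-used memory comparison.
    # The applications and numbers are invented for illustration.

    apps = [
        {"name": "spark_etl",    "requested_gb": 64,  "peak_used_gb": 20},
        {"name": "hive_reports", "requested_gb": 32,  "peak_used_gb": 28},
        {"name": "ml_pipeline",  "requested_gb": 128, "peak_used_gb": 40},
    ]

    for app in apps:
        waste_gb = app["requested_gb"] - app["peak_used_gb"]
        waste_pct = 100.0 * waste_gb / app["requested_gb"]
        print(f'{app["name"]:>14}: {waste_gb:>3} GB over-reserved '
              f'({waste_pct:.0f}% of the request)')

    # spark_etl and ml_pipeline are candidates for smaller reservations;
    # hive_reports is already sized close to its actual usage.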

Availability

Pepperdata 360° Reports are available immediately for Pepperdata Platform Spotlight customers. For a free trial of Pepperdata, visit https://www.pepperdata.com/trial.

About Pepperdata
Pepperdata (https://pepperdata.com) is the leader in big data Application Performance Management (APM) solutions and services, solving application and infrastructure issues throughout the stack for developers and operations managers. The company partners with its customers to provide proven products, operational experience, and deep expertise to deliver predictable performance, empowered users, managed costs, and managed growth for their big data investments, both on-premises and in the cloud. Leading companies like Comcast, Philips Wellcentive, and NBC Universal depend on Pepperdata to deliver big data success.

Founded in 2012 and headquartered in Cupertino, California, Pepperdata has attracted executive and engineering talent from Yahoo, Google, Microsoft, and Netflix. Pepperdata investors include Citi Ventures, Costanoa Ventures, Signia Venture Partners, Silicon Valley Data Capital, and Wing Venture Capital, along with leading high-profile individual investors. For more information, visit www.pepperdata.com.

###

Contact:
Samantha Leggat
samantha@pepperdata.com

Pepperdata and the Pepperdata logo are registered trademarks of Pepperdata, Inc. Other names may be trademarks of their respective owners.

Sample Capacity Optimizer Report (attached): memory and money saved with Capacity Optimizer

February 19, 2019

AI – Separating Hype from Reality

Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data. AI’s recent resurgence can be attributed to increased data volumes, advanced algorithms, and improvements in computing power and storage, but AI is not new.  The term artificial intelligence was coined in 1956 by John McCarthy.

Early AI research in the 1950s explored topics like problem-solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.  These efforts paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

Even as artificial intelligence has become the most disruptive class of technologies driving digital business forward, there is confusion about what it is, and what it can and cannot do—even among otherwise tech-savvy professionals. If you search the web, you’ll find as many definitions of AI as there are people who write them.  So let’s take a different approach and identify what AI can do in an applied environment.

The 6 Pillars of AI

  1. AI automates repetitive learning and discovery through data. But AI is different from hardware-driven, robotic automation. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks reliably and without fatigue. For this type of automation, human inquiry is still essential to set up the system and ask the right questions.
  2. AI adds intelligence to existing products. In most cases, AI will not be sold as an individual application. Instead, products you already use will be improved with AI capabilities, much like Alexa was added as a feature to Amazon devices. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies.
  3. AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that the algorithm acquires a skill: The algorithm becomes a classifier or a predictor. Just as the algorithm can teach itself how to play chess, it can also teach itself what product to recommend online, and adapt when given new data. (A minimal code illustration of this idea follows the list.)
  4. AI analyzes more and deeper data using neural networks that have multiple hidden layers. Building a fraud detection system with five hidden layers was almost impossible a few years ago, but that has changed with today's computing power and big data. Deep learning models require lots of data because they learn directly from the data: the more data you can feed them, the more accurate they become.
  5. AI achieves incredible accuracy through deep neural networks. For example, your interactions with Alexa, Google Search and Google Photos are all based on deep learning, and they become more accurate the more we use them. In the medical field, AI techniques such as deep learning, image classification and object recognition can now be used to find cancer on MRIs with the same accuracy as highly trained radiologists.
  6. AI extracts the most value out of data. When algorithms are self-learning, the data itself can become intellectual property. The answers are in the data; you just have to apply AI to get them out. Since the role of the data is now more important than ever before, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win.
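To ground pillar 3, here is a minimal, self-contained example of letting the data do the programming: a classifier that learns a decision rule from labeled examples instead of having the rule hand-coded. It uses scikit-learn and toy data purely for illustration.

    # Minimal illustration of pillar 3: the algorithm becomes a
    # classifier by learning from data rather than hand-written rules.
    from sklearn.tree import DecisionTreeClassifier

    # Toy training data: [hours_of_use, error_count] -> 1 = failing, 0 = healthy
    X = [[1, 0], [2, 1], [40, 9], [35, 12], [3, 0], [50, 15]]
    y = [0, 0, 1, 1, 0, 1]

    model = DecisionTreeClassifier().fit(X, y)

    # The learned model now labels data it has never seen before.
    print(model.predict([[45, 10], [2, 0]]))  # -> [1 0]

Nothing in the code specifies what distinguishes a failing system from a healthy one; the tree infers that boundary from the examples, and refitting on new data updates the rule.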

Peak Hype for AI?

Every new technology goes through a hype cycle in which the news coverage is strongly positive at first, often gushing with the possibility of life-altering transformation. Even though AI is not new and has already been through earlier hype cycles, the current cycle, which began in 2012, has been notable for the sheer volume of media coverage.

Gartner’s Hype Cycle tracks emerging information technologies on their journey toward mainstream adoption. It is designed to help companies separate hype from viable business opportunity, and to give an idea of when that value may be realized.

AI is at peak hype and still an unknown quantity for many, even as it is set to become the most disruptive class of technologies driving digital business forward over the next 10 years.

“Democratized Artificial Intelligence” was recognized as one of the five megatrends in Gartner’s most recent (2018) Hype Cycle. Machine learning and deep learning sit at peak hype and are predicted to be 2-5 years away from mainstream adoption. Cognitive computing is also at peak hype but up to 10 years away, while artificial general intelligence (AI with the ‘intelligence’ of an average human being) is 10+ years away and still in the early innovation phase.

Confusion and Unsubstantiated Vendor Claims

The Verge recently reported that many companies in Europe are taking advantage of the AI hype to make unsubstantiated claims in an effort to generate excitement and increase sales and revenue. According to a survey from London venture capital firm MMC, 40 percent of European startups that are classified as AI companies don’t actually use artificial intelligence in a way that is “material” to their businesses. MMC studied some 2,830 AI startups in 13 EU countries to come to its conclusion, reviewing the “activities, focus, and funding” of each firm in a comprehensive report published earlier this year. This situation is certainly not limited to EU-based vendors; it’s a global issue.

Tech Talks also identified similar misuse by companies claiming to use machine learning and advanced artificial intelligence to gather and examine data to enhance the user experience in their products and services. Many in the tech industry and the media also confuse what is truly AI with what is merely machine learning. We’ll take a closer look at that quandary in a future blog.


June 12, 2019

Now That You’re in the Cloud: How to Manage Costs?

Remember the early days of moving to the cloud? It seemed like there was nothing the cloud couldn’t do — and for less money than dedicated on-prem servers.  A lot has changed in just a few years. In the early days of cloud adoption, it was normal for businesses to have about five percent of workflows in the cloud. Today, that number is closer to 30 percent.

A 2018 cloud computing study by IDG Communications found that organizations continue to increase their investment and evolve their cloud environments to leverage the technology to drive their businesses forward. With 73 percent of the 550 surveyed organizations having at least one application, or a portion of their computing infrastructure already in the cloud, it is no longer a question of if organizations will adopt cloud, but how.

That increase in cloud usage has come with a similar increase in cost. Unfortunately, the cost hikes weren’t part of the plan. Gartner estimates that by 2020, 80 percent of organizations will have overshot their cloud budgets, largely because they lack optimization.¹ A systemic failure to plan has led to cloud costs getting out of control.

The failure to manage cloud costs is attributed to three key challenges that organizations are struggling to overcome:

  • Complex multi-cloud environments are becoming commonplace, and billing details can vary significantly depending on the provider.
  • Many I&O teams are operationalized for traditional data center principles rather than cloud IaaS, and they lack the organizational processes to manage costs in the cloud.
  • There are many options to address cloud expense management. As a result, I&O leaders may struggle to align options with their organization’s cloud strategy.

It’s a common belief that operating a data center in the cloud is always cheaper than on-premises. While this can be true, it’s not an absolute. In some cases, cloud operating costs can actually be higher. This can be caused by poor cost governance, misalignment of system architectures, duplication of processes, unusual system configurations, or increased staffing costs. Even IT pros who have worked in on-prem data centers for a long time fail to consider the costs of cloud computing services, and so don’t do enough about ongoing cloud cost management or usage monitoring. That is, until they get that $300,000 bill.

According to Gartner, cloud services can have a 35% underutilization rate in the absence of effective management, as resources are oversized and left idling. In the course of regular cloud operations, resources can be abandoned and accrue charges without contributing value.  This is consistent with our observations at Pepperdata. While working with organizations that are migrating workloads to the cloud, we often observe unnecessary over-provisioning of resources.
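The underutilization Gartner describes is straightforward to detect once utilization is actually measured. The sketch below flags oversized or abandoned cloud resources from sampled CPU utilization; the fleet data and thresholds are assumptions chosen only to demonstrate the idea.

    # Hypothetical sketch: classify instances by average sampled CPU
    # utilization. Data and thresholds are invented for illustration.

    def classify_instance(cpu_samples, idle_pct=5.0, low_pct=35.0):
        avg = sum(cpu_samples) / len(cpu_samples)
        if avg < idle_pct:
            return "possibly abandoned: consider terminating"
        if avg < low_pct:
            return "underutilized: consider downsizing"
        return "reasonably sized"

    fleet = {
        "analytics-1": [2, 1, 3, 2, 2],       # idling, accruing charges
        "etl-node-7":  [22, 30, 18, 25, 20],  # oversized for its load
        "db-primary":  [70, 65, 80, 72, 68],  # working as intended
    }

    for name, samples in fleet.items():
        print(name, "->", classify_instance(samples))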

The good news is that controlling cloud costs is well within the capabilities of most organizations. The Pepperdata Capacity Optimizer provides a way for I&O teams to efficiently analyze and right-size their on-prem and cloud resources to more closely match utilization, which typically results in significant cost savings. When used as part of an organization’s cloud usage management process, it can also help keep cloud costs more predictable by tracking and reporting ongoing usage.

The Big Data Cloud Migration Cost Assessment tool from Pepperdata enables you to compare recommended instances and costs across AWS, Azure, Google Cloud Platform, and IBM Cloud. Pepperdata can provide you with an independent third-party analysis of your on-premises workloads along with detailed performance profiles for mapping to the cloud. The Pepperdata assessment tool uses this data to recommend the most cost-effective cloud instances that match your on-premises SLAs and deliver optimal application performance.

Some public cloud service providers also have pricing calculators that can help you to estimate the costs you’ll face after a cloud migration vs. your current costs. The AWS TCO Calculator and the Azure Pricing Calculator are two available options.

_____________________________

¹ Gartner, “How to Identify Solutions for Managing Costs in Public Cloud IaaS,” Brandon Medford and Craig Lowery, 22 September 2018


June 4, 2019

How AI Helps ITOps Keep Pace with Cloud Adoption

ITOps (IT Operations) was never easy. Many environments grew organically, with new equipment added over the years. Most enterprises have infrastructure from multiple vendors, each of which requires companies to update to the latest releases and patches. These factors have made ITOps ever more complex, and the hybrid cloud strategy that many companies are now adopting makes the ITOps job even more challenging. The issue is not the cloud itself, but the rapid rate of adoption and the difficulty of acquiring operational cloud know-how.

Spending on cloud computing infrastructure continues to grow at a furious pace. The global cloud infrastructure services market grew 42 percent year-on-year in the first quarter of 2019, with Amazon Web Services (AWS) making the biggest gain in dollar terms, up $2.3 billion (41 percent) on Q1 2018, according to data from tech analyst firm Canalys. That performance put AWS further ahead of second-placed Microsoft, even though Microsoft grew sales by $1.5 billion, or 75 percent. Google was the fastest growing of the top three in percentage terms, up 83 percent from $1.2 billion to $2.3 billion.

A market growing at 42 percent year on year (although slightly slower than the 46 percent growth in Q4 of last year) is remarkable. But according to Canalys, the battle for enterprise customers will intensify this year as the big cloud vendors seek to maintain that growth.

Many businesses have already finished moving easy-to-shift applications to the cloud, and are maturing their approach to cloud computing. This involves moving to multi-cloud and hybrid-IT strategies that leverage the strengths of different cloud service providers and deployment models to meet a variety of application, compliance, cost and performance requirements.

In an effort to gain a competitive edge and expand their market shares, some cloud service providers are now looking at ways to enter customers’ existing data centers. For example, AWS will start shipping its first appliance, Outposts, later this year, which will see AWS hardware on customer premises, largely to deal with the issue of latency.

Other vendors are looking at how to integrate across multiple clouds, like Google Anthos, an application management platform that supports multiple clouds. Adding new partners or making cloud part of a broader business transformation strategy are other ways that cloud vendors will try to boost sales this year. Most companies will end up using a combination of in-house data centers, plus cloud-computing technologies across a number of vendors. Few will choose just one vendor for every service.

All these factors put the ITOps team under even greater pressure when an application performance problem surfaces. ITOps teams also suffer from alert overload: too many false positives and too little information can make root-cause investigations a never-ending search. The answer lies in artificial intelligence (AI). Many ITOps tasks are routine and alert-based, so why not train a machine learning model that can reduce root-cause MTTR by 95 percent and enable the team to be proactive?

This is the value proposition of AIOps or Artificial Intelligence for IT Operations. It speeds MTTR, informs ITOps on possible issues before they turn into problems, and changes operational modes from reactive to proactive.

There are significant differences between AIOps products. One way Pepperdata differs is in the number of metrics and amount of data being collected for analysis. You need a consistent flow of data to turn an AI algorithm into a viable model. The more meaningful and complete the metrics are, the more accurate the model will be. Pepperdata captures more than 350 application and infrastructure performance metrics every 5 seconds.  This substantial data set enables the Pepperdata AI model to be much more robust and accurate than alternative approaches. Constantly capturing and analyzing this massive amount of data and metrics is one of our biggest advantages, resulting in major benefits to our customers.
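The value of dense, frequent samples is easiest to see with a simple example. The sketch below applies a rolling z-score to a stream of 5-second CPU readings to surface anomalies; it is a generic textbook technique shown for illustration, not Pepperdata's actual model.

    # Generic illustration: flag anomalies in a dense metric stream
    # using a rolling z-score. Not Pepperdata's model; a sketch of why
    # frequent samples make detection statistically reliable.
    from statistics import mean, stdev

    def rolling_anomalies(samples, window=12, threshold=3.0):
        """Flag indices deviating > `threshold` sigmas from the
        preceding `window` samples (one minute at 5s intervals)."""
        flagged = []
        for i in range(window, len(samples)):
            history = samples[i - window:i]
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
                flagged.append(i)
        return flagged

    # Hypothetical CPU readings sampled every 5 seconds, one spike.
    cpu = [48, 50, 49, 51, 50, 52, 49, 50, 51, 49, 50, 51, 97, 50, 49]
    print(rolling_anomalies(cpu))  # -> [12]

With coarser sampling (say, once a minute) the spike at index 12 might never be observed at all, which is why sample density matters for model accuracy.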

The Pepperdata AI value proposition is simple. We use an AI model to automatically diagnose existing problems faster, identify potential problems, and offer actionable insights for ITOps teams. The Pepperdata AIOps approach also enables ITOps to scale infrastructure to match actual use and eliminate wasted resources that result from over-provisioning. This is extremely important in cloud environments that charge for every bit of CPU, memory, and storage being used. The Pepperdata Capacity Optimizer can automatically help ITOps reduce cloud (and on-prem) infrastructure requirements and associated costs by 30 to 50 percent, eliminating the need for costly and time-consuming manual tuning.

Learn more about Capacity Optimizer and how it improves capacity utilization and saves ITOps time and expense.

May 28, 2019