Our “Pepperdata Profiles” series shines a light on our talented individuals and explores employee experiences. This week, we sat down with Alex Pierce, a Pepperdata field engineer extraordinaire. Alex shared six years’ worth of insights into optimizing big data cloud performance, and his take on big data’s past, present, and future.


Alex, how does your role as a field engineer differ from other engineering roles?

Good question. In many organizations, my role is called a sales engineer or a solution architect. In short, I don’t write much code for our products; instead, I support the sales team throughout the sales process.

When you’re dealing with software sales, a key practice is validating that our solution works within the customer’s environment. That means working with the customer to install, test, and validate our software in their cloud systems. No matter what a customer is running, their environment almost always ends up being unique. So there are always tweaks to make, adjustments and reconfigurations needed.

Sometimes, my role means that I have to go back to the engineering office and say, “Hey, we’ve discovered this one strange configuration that we just don’t work in. Can we get this changed within X amount of time so that we can continue this process with a potential customer?”

What’s your unique view of the Pepperdata product? What most excites you about it?

At its core, Pepperdata is fairly simple, even though it solves complex problems. One of the most impressive things is how its big data cloud performance tuning shows such rapid time-to-value. 

Often, we will be working with potential customers, and we’ll tell them, “Okay, it’ll take us about an hour to install the software.” They’re like, “We don’t believe you.” And then when we’re 30 minutes through the process, they go, “Oh, you weren’t kidding!” That’s uncommon in the world of enterprise software. 

Having that kind of ease of use and a very straightforward system is something I really enjoy about working with this software. I never find myself standing in front of a customer hoping against hope that their environment is perfect for us, or that the solution will work. I’ve worked at different companies before this, and I’ve experienced that fear of facing a customer and thinking, “Man, I hope you don’t have X and Y installed as well, or this program is going to break.” But here, that’s never been the issue.

That’s one of the big things that keeps me here, besides enjoying working with the company. We do exactly what we say we do, and we do it without a lot of fanfare. But we always deliver.

That’s awesome. And what other ways do you find Pepperdata to be different from other products?

Currently, it’s what we’re doing with Pepperdata Capacity Optimizer, which is a really unique product. Nobody else does this level of big data cloud performance tuning.

For the first couple of years, explaining exactly what the software does and how it works was sometimes met with a lot of incredulity. People would say, “Huh, well, this sounds pretty magical. What does it do?” But the fact that Capacity Optimizer works as well as it does is really cool. It truly is a unique value proposition. We say, “We’re going to optimize big data cloud performance and make it more efficient,” and we do it.

And it’s satisfying to see customers become pleasantly surprised. We literally saved a customer from having to go out and buy 150 or 200 new systems, because that work can now be done in their existing environment. That’s pretty cool.

What do you think of the market that Pepperdata plays in? What changes do you see going forward?

The underlying technologies are always changing. Somebody always comes up with a better execution engine or a more efficient way of organizing queries. But the core of it comes down to the same thing, every time: You have a lot of data, and you want to analyze it in some way so that you can extract intelligence out of it, or use it to build better systems or reporting.

The move to the cloud is going to be an interesting one; that’s really where the future is. These big data sets can only grow larger, and not every company can afford to operate 5,000 servers dedicated strictly to analyzing that volume of data 24/7. The vast majority of systems out there will need that big data cloud performance and capacity at some point. If companies can store all this data in one location and just access compute when they need to extract intelligence from it, that’s going to be the next big win.

Right now, this landscape is still in its infancy. It’s still pretty inefficient in terms of launching and fully utilizing the right number of compute instances. But moving the compute part to an operational expense, instead of the traditional capitalized assets of all that hardware, is going to be the trend in this space. And companies that are already mixing it up should run a combination of both, because some of them can’t buy hardware fast enough to meet their growing compute needs.

This is also definitely going to be our contribution moving forward. Pepperdata will optimize big data cloud performance and make sure that you are actually utilizing and getting the value out of the cloud capacity that you’re paying for. 

This big data pressure, is this a pre-existing trend? Or has it just been accelerated by the crazy year we’ve had?

I can’t say for certain if it has accelerated, but it is a general trend. The sheer volume of information being generated by everything, from simple financial transactions to data from IoT devices and networks, is rapidly increasing. It’s increasing faster than traditional IT assets can scale.

We’re literally running into this conundrum where big companies can’t buy hardware and storage capacity fast enough, so they buy a ton all at once. A lot of that then sits underutilized, so it’s wasted money. That’s where cloud systems come in, with their shared assets and multi-tenancy. You can use just the big data cloud performance and capacity you need, when you need it. That’s really where things need to go. Otherwise, IT won’t be able to keep up with the demands of the business, which is bad, considering many companies already view IT as a cost center rather than a profit center.

And do you often meet customers affected by this conundrum?

Generally, yes. It’s especially true for companies we work with that have gone all in on cloud-based systems and are only now discovering what it means to be cloud-native from a cost perspective. They suddenly realize, “Oh, everything we’re doing is great, except we’re purchasing way more cloud capacity than we need. A big data cloud performance tuning tool that helps us launch appropriate capacity at the appropriate times makes such a huge difference.”

Every company that’s paying attention tries to figure out how to improve big data cloud performance, and does an okay job of looking at the cost of the infrastructure it already has. But when companies are immensely profitable, they’re sometimes not as careful as you would think.

At the same time, we have IT shops that get us and are like, “Oh, now we know what our hardware is doing. Now we can control our big data cloud performance and on-site spend, and even push spending much further down the line because we can use our existing capacity.” Both of those things play very well for us.

Bearing all this in mind, in the six years you’ve worked for Pepperdata, what has changed?

For one, my role has changed. When I started, we were much smaller, and I had a few more responsibilities that I now share with other employees. But that’s the nature of any startup, right?

The biggest change, though, is the market we’re targeting. Before, we pretty much exclusively targeted Fortune 500-type companies; basically, the ones already dealing with huge volumes of data. But now, that’s every company. It’s not just a few dedicated giants anymore. It’s every retail company, every financial service. Even healthcare analytics has become a big growth space in this environment.

The core of what we do is pretty much the same: We help you optimize big data cloud performance and monitor your big data in the cloud more efficiently. Some of that’s automated, some of that’s our analysis. But the need for what we do has just grown over time, as different spaces have realized they need something to help control the volumes of data they’re dealing with computationally. So now we see pretty much every vertical talking to us.

And these companies don’t necessarily have the same kind of deep pockets for large-scale hardware spend that some others do. So what they do spend needs to be more efficient (which we assist with), and it also likely needs to be in cloud systems (because they just can’t spend the money on dedicated compute). Those are big changes.

You mentioned your take on the financial governance of big data cloud performance and the company’s role in that. Do you see that as one of the challenges many people have in the cloud? 

100%. A lot of these companies are using cloud-based systems, and they believe they’re doing the right thing because their work is hitting and completing SLAs. Then they get the bill at the end of the month, and they’re like: “Wow, this is not what we signed on for. We were expecting a cost reduction, not a cost increase.”

That’s a major market for us: what people are asking for versus what they’re actually using don’t always align. The hardest part for some people is figuring out how to improve cloud performance without incurring more cost. And there are so many great articles out there about people who have made that discovery the hard way. Even large companies have been smacked over the head by that giant bill from Amazon.

That’s where Pepperdata helps people. It automates scaling up and down, by the right amount at the right time, based on utilization and workload in these dynamic, on-demand operational environments. Our big data cloud performance tuning works so that customers don’t get that surprise bill at the end of the month.
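To make that concrete, here is a minimal Python sketch of what utilization-driven scaling logic can look like. The thresholds, node limits, and the autoscale helper are all hypothetical, illustrative assumptions; the interview doesn’t detail Pepperdata’s actual algorithm, and this is not it.

# A minimal, hypothetical sketch of utilization-driven autoscaling.
# Thresholds, limits, and helpers are illustrative assumptions,
# not Pepperdata's actual algorithm.

SCALE_UP_THRESHOLD = 0.80    # grow when average utilization exceeds 80%
SCALE_DOWN_THRESHOLD = 0.40  # shrink when it falls below 40%
MIN_NODES, MAX_NODES = 2, 100

def autoscale(current_nodes: int, avg_utilization: float) -> int:
    """Return the desired node count for the next scaling interval."""
    if avg_utilization > SCALE_UP_THRESHOLD and current_nodes < MAX_NODES:
        return current_nodes + 1   # under pressure: add a node
    if avg_utilization < SCALE_DOWN_THRESHOLD and current_nodes > MIN_NODES:
        return current_nodes - 1   # paying for idle capacity: remove a node
    return current_nodes           # healthy band: hold steady

# Example: a 10-node cluster running at 85% utilization grows to 11 nodes.
print(autoscale(10, 0.85))

A real decision, of course, also has to weigh workload shape, SLAs, and instance pricing, which is exactly why doing it automatically and continuously matters.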

Any final words you want to say about what these mean for the future of big data?

I think at this point, we kind of see where the future is going. The cloud-based future is happening, and it’s happening rapidly. The biggest thing about the move to the cloud is having these resources available to everybody. It’s not just the big companies that can afford to buy large amounts of them and monopolize them anymore. And I think it’s going to be interesting to watch the small players get access to some of the big data cloud performance tools that the big players have always had.

The other big change is going to be on the machine learning side: moving from the traditional processor computing model to dedicated GPUs or other dedicated high-velocity engines for certain operations. Moving that capacity into the cloud is going to affect the availability of really interesting machine learning models, as well as the need for big data cloud performance monitoring. That’s going to be exciting to watch, for us and for everybody else. These dedicated GPUs are now in cloud systems and accessible on a pay-as-you-go basis for companies that may not have been able to access them before, for a bunch of reasons. I’m looking forward to seeing how that shakes things up.

 

To read more profiles about Pepperdata stars, read part one of our Pepperdata Profiles series.

The views expressed on this blog are those of the author and do not necessarily reflect the views of Pepperdata. Any solutions offered by the author are environment-specific and not part of the commercial solutions or support offered by Pepperdata.

Explore More

Looking for a safe, proven method to reduce waste and cost by up to 50% and maximize value for your cloud environment? Sign up now for a free 30-minute demo to see how Pepperdata Capacity Optimizer Next Gen can help you start saving immediately.