Optimized Time Is Money

“Time is money” has long been a well-worn adage in the financial markets. However, in a world where massive volumes of business are transacted in milliseconds, “optimized time is money” seems more appropriate. The enhanced adage conveys expectations that go well beyond system uptime (enabling business) or downtime (stopping business).

For business executives, the technical details of your organization’s big data and analytics infrastructure are a rabbit hole too deep. However, it is important to be aware of how the machinery that powers your client-facing and operational platforms is managed and maintained, because even a slight degradation in performance has direct implications for the business.

In today’s hyper-accelerated and interconnected markets, SLAs can no longer be binary. A 99.999% uptime requirement doesn’t account for what can and does happen at any given moment within that uptime, nor does it address how disparate systems must function as a fully optimized and integrated set of services.

Digital Complexities within Capital Markets

In our earlier blog post, “Digital Transformation in Banking,” we wrote at length about the need to optimize the machinery that powers digital transformation. That post is meant to help business executives have constructive conversations with their platform operations and DevOps teams about which infrastructure challenges need to be met, and how and why.

To help connect the dots, we’ve provided four “data stories” below. Each is an example of the digital and data complexity we regularly see in capital markets, drawing on alternative data, unstructured data, equities research, and trade automation. Together, they provide context for why fully optimized machinery to capture, process, present, and act on that data is critical.

Data Drives Markets

The recent and wild market swings of GameStop and AMC stocks brought us a brief respite from Covid-19 news. The related controversies gave us plenty to read and re-tweet, and they also made evident just how intertwined the financial markets are with social media. For example, the House Financial Services Committee hearings brought together, in one virtual room, three very disparate actors of this drama: an incumbent hedge fund (Citadel), an upstart online brokerage firm (Robinhood), and a social media platform (Reddit).

Controversies aside, this is a great example of data driving markets.

We live in a world that is drowning in all sorts of data. In the GameStop/AMC scenario, the data came from online chat rooms and electronic bulletin boards. But there are plenty of other alternative, non-financial types of data that drive markets. Hedge funds, global banks, and day traders alike go to great lengths to consume and analyze data sets like satellite imagery, IoT sensor readings, voice recordings, and traffic patterns as rapidly as possible. In finance, speed and accuracy mean money. Connecting many varieties of data, most of it unstructured, can mean even more money.

More Data Means More Complexity

It would seem that the more varieties of data are consumed and processed, the greater the likelihood of success. But processing massive amounts of unstructured data in near real time is a difficult engineering feat, precisely because unstructured data has no consistent, predictable structure.

News, chats, and company filings are all text, but their structures will never match because they are free form. Even the rows and columns of spreadsheets differ radically from file to file. Consider how much news, video, file downloads, and texting could be relevant to any one topic or hunch, and you can see how complex it gets.

The structured data architectures of mainframes and relational databases have worked for decades, with almost unlimited capacity to scale. But to address the unstructured data paradigm, the landscape has morphed into a complex array of storage platforms, with just as many processing and compute engines for each type of data store, each suited to different use cases.

For example, an end-to-end data pattern for a high-volume unstructured data feed might use horizontally scalable function apps (small, independent snippets of code, like subroutines) to ingest and transform the data into a usable structure. The data then flows into a large-scale unified data analytics engine (e.g., Apache Spark). At that point, depending on the business use case, AI engines such as TensorFlow or PyTorch are trained and retrained on that data, all the while consuming significant GPU cycles.
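To make that pattern concrete, here is a minimal sketch in Python, assuming PySpark is available. The feed contents, field names, and the tiny in-memory batch are illustrative assumptions; a production pipeline would read from a streaming source and hand its structured output to a downstream training job.

```python
# A minimal sketch of the ingest -> transform -> analyze pattern described above.
# Field names and the in-memory "feed" are illustrative assumptions.
import json

from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def transform(raw_message: str) -> dict:
    """Function-app-style handler: turn one free-form record into a predictable structure."""
    record = json.loads(raw_message)
    return {
        "symbol": record.get("ticker", "UNKNOWN").upper(),
        "source": record.get("source", "unknown"),
        "text": (record.get("body") or "").strip(),
    }


spark = SparkSession.builder.appName("unstructured-feed-sketch").getOrCreate()

# In production this would be a streaming source (Kafka, Event Hubs, etc.).
raw_feed = [
    '{"ticker": "gme", "source": "forum", "body": "to the moon"}',
    '{"ticker": "amc", "source": "news", "body": "Theater chain raises fresh capital"}',
]

df = spark.createDataFrame([transform(m) for m in raw_feed])

# Downstream, an AI engine (TensorFlow, PyTorch, ...) would train on this
# structured output; here we simply aggregate message counts per symbol.
df.groupBy("symbol").agg(F.count("*").alias("messages")).show()
```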

This one data pattern can require hundreds of servers, each scalable in different ways. For example, many organizations use application virtualization platforms (e.g., Kubernetes) to scale workloads dynamically and on demand.
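As a hedged illustration of that kind of on-demand scaling, the sketch below uses the official Kubernetes Python client to scale a hypothetical ingestion deployment ahead of an expected burst. The deployment and namespace names are assumptions, and a real estate would typically rely on autoscalers rather than manual calls like this.

```python
# Illustrative only: scale a hypothetical "feed-ingest" deployment on demand.
# Deployment and namespace names are assumptions; most clusters would use a
# HorizontalPodAutoscaler instead of imperative calls like this.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="feed-ingest",
    namespace="markets",
    body={"spec": {"replicas": 10}},  # scale out ahead of an expected feed burst
)
```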

Depending on data types and business use cases, there could be hundreds if not thousands more data patterns competing for the same resources. For example, if they are all being processed via Apache Spark, any one of them could prove to be a bottleneck for another.
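One common (though by no means the only) way to keep a single Spark pattern from starving its neighbors is to enable dynamic allocation and cap each application's footprint. The settings below are standard Spark configuration properties; the specific numbers are illustrative placeholders, not tuning advice.

```python
# Illustrative Spark settings to keep one data pattern from monopolizing the cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("news-nlp-pattern")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")  # needed without an external shuffle service
    .config("spark.dynamicAllocation.maxExecutors", "20")  # cap this pattern's share of the cluster
    .config("spark.executor.memory", "4g")
    .getOrCreate()
)
```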

Life Is Not Quarterly. It Is Real Time, Anytime

Equities in particular, and the financial markets in general, are moving toward full automation in which human traders are no longer involved. This has been a long, multi-decade process, but with the advent of AI and the ability to process data at scale, it is no longer a fantasy but an imminent reality. The days of the star researcher at a large buy-side firm manually writing up the results of their complex spreadsheets are numbered.

Due to advances in technology and regulatory changes, it no longer makes sense for a firm to employ legions of researchers. Technology can do the work at scale. More importantly, it can generate research on demand for any of thousands of equities, regardless of how esoteric they might be. Imagine a world where equity research is generated from current market conditions, immediately and on demand, rather than waiting for quarterly updates.

As the GameStop/AMC drama described earlier demonstrated, life and markets are inextricably linked, and life is not lived quarterly. There has been a fundamental shift in the equities research business model, and the organizations that succeed will be the ones that address that shift better than others.

The real-time ingestion, transformation, and incorporation of corporate guidance calls, SEC documents, voice, satellite, and drone data into sophisticated research models is now required for success.

The Shift to Large-Scale Automation

For eons, trading involved two human counterparties negotiating a price, and for many markets it is still that way. But humans have a limited capacity to process new information rapidly. That is why, as with the equity research described above, the days of the star trader at a large sell-side firm are numbered.

The best traders rely on the wisdom gained from many years of trading to make what they believe are the best decisions. Firms now incorporate that expertise into AI models and train them on decades of historical trades to make statistically based decisions. Those decisions can be further enriched with insight developed by research algorithms working on unstructured data. One human trader can’t do all of that in their head. However, a collection of advanced statistical models can, with no limitation of time or place. This is a shift to large-scale, real-time automation.
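As a purely illustrative sketch (not any firm's actual model), the snippet below encodes that idea with scikit-learn: a gradient-boosted classifier trained on synthetic "historical trade" features. The features, labels, and numbers are invented for the example.

```python
# Illustrative only: a statistical model trained on synthetic "historical trades".
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=7)
n = 5_000

# Synthetic features a desk might track: spread, order-book imbalance,
# recent volatility, and a news-sentiment score from the research pipeline.
X = np.column_stack([
    rng.normal(2.0, 0.5, n),   # bid-ask spread (bps)
    rng.uniform(-1, 1, n),     # order-book imbalance
    rng.gamma(2.0, 0.5, n),    # realized volatility
    rng.uniform(-1, 1, n),     # news sentiment
])
# Synthetic label: did the trade beat its benchmark?
y = (0.8 * X[:, 3] - 0.5 * X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```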

Ingesting news from across the globe in different languages, translating, tagging, and executing natural language processing (NLP) models, all in real time, can give a depth of information to a model that a human could never process. Those models detect subtle signals that can lead to trade gains that would otherwise have been missed.
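A minimal sketch of that translate-tag-score flow, assuming the open-source Hugging Face Transformers library, is shown below. The models are publicly available ones downloaded on first use, and the German headline is an invented example, not real news.

```python
# Illustrative translate-then-score flow for a non-English headline.
# Models are downloaded on first use; the headline is an invented example.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
sentiment = pipeline("sentiment-analysis")

headline_de = "Autohersteller meldet Rekordgewinn im dritten Quartal"
headline_en = translate(headline_de)[0]["translation_text"]
signal = sentiment(headline_en)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}

print(headline_en, signal)
```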

A Model for Success

It is not unusual for the data stories described above to be hooked together into larger ecosystems. And since financial markets are full of messy data that moves at close to the speed of light, the complex machinery that powers these ecosystems needs to be observed, managed, and optimized in real time just to keep up. Any slight degradation in performance will put your firm behind the curve.

Run More Apps, Track Spend, and Manage Costs


Many organizations have gotten good at handling these challenges, often using performance monitoring solutions designed to observe and optimize complex tooling. The competitive differentiator comes down to who does a better, faster, and more accurate job of running more apps, tracking spend, and managing CapEx and OpEx costs.

Pepperdata is an example of such a solution. Unlike traditional performance monitoring solutions that merely summarize static data and require manual, time-consuming, application-by-application tuning, Pepperdata provides 360-degree visibility into your infrastructure and applications across the big data analytics stack, whether it resides on premises, in the cloud, or in a hybrid of both.

To provide an additional edge over your competition, Pepperdata also delivers machine-learned optimization recommendations at the system level, the application level, and down to the individual data query. This is extremely important for hyper-efficient operations across distributed and cross-functional domains.

Attain Full Visibility through Observability

Big data impacts finance, so it’s critical that platform operations and DevOps teams can collaborate with tools that automatically optimize system resources while providing a detailed, correlated understanding of each application. Pepperdata does this through observability, using hundreds of application and infrastructure metrics collected in real time. In the cloud or in the data center, this automated approach gives you complete visibility and insight into your big data stack and enables you to run more applications, coupled with continuous tuning, recommendations, and alerting.

In financial markets, optimized time is money, so it’s a good idea to employ tools that close visibility gaps, recapture wasted resources, and maximize current infrastructure.

Take a look at the resources below to get more insight into some of the concepts described above:

Explore More

Looking for a safe, proven method to reduce waste and cost by up to 50% and maximize value in your cloud environment? Sign up now for a free 30-minute demo to see how Pepperdata Capacity Optimizer Next Gen can help you start saving immediately.