What is Scalability in Cloud Computing

Optimized Time Is Money

“Time is money” has always been a well-worn adage in the financial markets. However, in a world where massive volumes of business are transacted in milliseconds, “optimized time is money” seems more appropriate. The enhanced adage conveys expectations that go well beyond system uptime (enabling business) or downtime (stopping business).

For business executives, the technical details of your organization's big data and analytics infrastructure are a rabbit hole too deep. However, awareness of how the machinery that powers your client-facing and operational platforms is managed and maintained is important, because even a slight degradation in performance has direct implications for the business.

In today’s hyper-accelerated and interconnected markets, SLAs can no longer be binary. A 99.999% uptime requirement doesn’t account for what could, and does, happen at any given moment within that window. Nor does it address how disparate systems must function as a fully optimized and integrated set of services.

Digital Complexities within Capital Markets

In our earlier blog post, “Digital Transformation in Banking,” we wrote a lot about the need to optimize the machinery that powers digital transformation. That post is meant to help business executives have constructive conversations with their platform operations and DevOps teams concerning what, how, and why certain infrastructure challenges need to be met.

To help connect the dots, we’ve provided four “data stories” below. Each is an example of the digital and data complexity we regularly see in capital markets, drawing on alternative data, unstructured data, equities research, and trade automation. Together, they provide context on how critical fully optimized machinery is for capturing, processing, presenting, and acting on that data.

Data Drives Markets

The recent and wild market swings of GameStop and AMC stocks brought us a brief respite from Covid-19 news. The related controversies gave us plenty to read and re-tweet, and made evident just how intertwined the financial markets are with social media. For example, the House Financial Services Committee hearings brought together, in one virtual room, three very disparate actors in this drama: an incumbent hedge fund (Citadel), an upstart online brokerage (Robinhood), and a social media platform (Reddit).

Controversies aside, this is a great example of data driving markets.

We live in a world that is drowning in all sorts of data. In the GameStop/AMC scenario, the data came from online chat rooms and electronic bulletin boards. But there are plenty of other alternative, non-financial types of data that drive markets. Hedge funds, global banks, and day traders alike go to great lengths to consume and analyze data sets like satellite imagery, IoT sensor readings, voice recordings, and traffic patterns as rapidly as possible. In finance, speed and accuracy mean money. The interconnectedness among many varieties of data (which are usually unstructured) can mean more money.

More Data Means More Complexity

It would seem that the more varieties of data consumed and processed, the greater the likelihood of success. But processing massive amounts of unstructured data in near real time is a complex engineering feat, precisely because unstructured data is so varied.

News articles, chat messages, and company filings are all text, but their structures will never match because each is free-form. Even the rows and columns of spreadsheets differ radically from file to file. Consider how much news, video, file-download, and messaging content could be relevant to any one topic or hunch, and you can see how complex this gets.
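To make the point concrete, here is a minimal sketch of the normalization problem: three free-form sources (news, chat, filings) each expose their text under a different shape, and a pipeline must map them into one common record before any analysis can run. The field names and sources are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Document:
    """One common shape that downstream analytics can rely on."""
    source: str
    text: str
    metadata: dict

# Hypothetical per-feed field names: each source puts its text
# under a different key, which is exactly the mismatch described above.
TEXT_KEY_BY_SOURCE = {
    "news": "headline",
    "chat": "message",
    "filing": "body",
}

def normalize(raw: dict, source: str) -> Document:
    """Map a raw feed record into the common Document schema."""
    text_key = TEXT_KEY_BY_SOURCE[source]
    text = raw.get(text_key, "")
    # Keep everything else as loosely structured metadata.
    metadata = {k: v for k, v in raw.items() if k != text_key}
    return Document(source=source, text=text, metadata=metadata)

docs = [
    normalize({"headline": "GameStop surges", "outlet": "ExampleWire"}, "news"),
    normalize({"message": "to the moon", "user": "u123"}, "chat"),
    normalize({"body": "Item 1A. Risk Factors...", "cik": "0000000000"}, "filing"),
]
```

In a real pipeline this mapping layer is where much of the engineering effort lands: every new feed means a new adapter, and the common schema is what lets the compute engines downstream scale independently of the sources.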

The structured data architectures of mainframes and relational databases have worked for decades with almost unlimited capacity to scale. Addressing the unstructured data paradigm, however, has produced a complex array of storage platforms, with just as many processing and compute engines for each type of data store, each suited to different use cases.

For example, an end-to-end data pattern for a high-volume unstructured data feed might have horizontally scalable function apps (e.g., small independent snipp