
The Computing Pendulum and Private Cloud

The US market for private cloud has taken a turn for the worse in recent years, to put it mildly. The dream of scalable and resilient services accessible to all seems to be realized strictly in public cloud(s) or in enterprise-grade environments squarely residing at the top of their respective food chains. As a result, security and operational assurance becomes the ephemeral responsibility of a large company which cannot be held liable, or of an upper corporate hierarchy which may not take kindly to questions about security practices at all (under the belief that "what they don't know can't hurt them"). Others have written their opinions on, and excuses for, the failure of adoption downmarket, often blaming immense technical hurdles or ecosystem semantics of which no actual prospective customer is remotely aware. The reasons were plentiful, many of them non-technical, but the goal of Open Source cloud was achieved in the fragmented reality of the Big Tent of OpenStack services - anyone can build and maintain clouds if they put their mind to it, really put their mind to it.

The ecosystem has all of the services one would want and more, and it's trivial to create new ones directly in the control and execution planes. The controllers and hypervisors can be built with operational and security guarantees, storage and networking can be hardened like a fortress thanks to their very limited access requirements, and each instance or workload can be carefully measured to ensure performance and integrity. Early adopters and well-funded organizations have access to the talent needed to build all of this magic using more magic involving bare metal provisioning and devops workflows (n-scale techniques). Smaller organizations, however, don't have a good jumping-off point into the tech, and have read the doomsday warnings in technical publications declaring the private cloud a failed concept on the grounds that financial backers pulled out (some of them having met their objective milestones years ahead of schedule). Fearing these technologies, they pay more money for an arguably "well known" hypervisor solution because the only talent they could find had a certificate from the company that made the tool, stating that this individual can tell the red button from the green one.

Despite the numerous headlines proclaiming the death of OpenStack when individual firms pulled their resource pools away from the main effort, the reality is that it was the efforts to promote it as a downmarket solution to compete with packaged commercial hypervisors which failed, not the technology itself. Those efforts failed on a marketing and support level; bringing up clouds from bare metal actually works quite well (even if you're a 17-hour flight away). OpenStack is very much alive in parts of the world which are growing infrastructure quickly and need a reliable and predictable base, and in places where it established a foothold and found success while it "was still hot." Here's the key point in all of this: you don't hear about OpenStack for the same reason you don't hear much about X company switches or Y company SANs - it is infrastructure which does its job correctly and is therefore invisible to the human eye. Infrastructure is only noticed when someone cuts the ribbon and when it breaks, so not hearing about infrastructure already running a fair chunk of the world is a good thing.

Another good thing is the level of control offered by private cloud deployments - the builders determine everything from metal and chipset to kernel and execution/control plane, creating defensive attributes far more complex to sort out than the known quantities of public clouds, and operating environments precisely tuned to the business logic covering the cost. The ability to embed telemetry collection and processing down to the bare metal and up into the workload provides a much better view of operational context and requirements, as well as an actual perspective into the security posture of the entire stack, not just the pieces the vendor determines you're allowed to see. Take your performance monitoring tools of choice (with relevant stack integration), add host and network intrusion detection plus kernel and userspace defenses, and roll it all into a dashboard showing you exactly what is happening at each tier of your private cloud. Sure beats a web dashboard with a smiley face telling you that "really, everything's OK."
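
To make that per-tier view concrete, here is a minimal sketch in Python of pulling metrics from each layer into one consolidated snapshot. The endpoint URLs, hostnames, and field layout are hypothetical placeholders under the assumption that each tier already exposes some JSON metrics feed; they are not the API of any particular monitoring product.

import json
from urllib.request import urlopen

# Hypothetical per-tier metrics endpoints; substitute whatever your
# monitoring stack actually exposes at each layer.
TIERS = {
    "bare-metal": "http://bmc-aggregator.example.local/metrics",
    "hypervisor": "http://compute01.example.local:9100/metrics",
    "workload": "http://app-vm.example.local:8080/metrics",
}

def collect(url, timeout=5):
    # Fetch one tier's JSON metrics; surface failures instead of hiding them.
    try:
        with urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (OSError, ValueError) as exc:
        return {"error": str(exc)}

def snapshot():
    # One consolidated view across every tier, rather than only the slice
    # a vendor dashboard chooses to expose.
    return {tier: collect(url) for tier, url in TIERS.items()}

if __name__ == "__main__":
    print(json.dumps(snapshot(), indent=2))

In practice this is the role a proper time-series pipeline and dashboard would fill; the point of the sketch is simply that in a private cloud every tier is yours to instrument.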

As the realities of hardware side-channel attacks sink into the collective consciousness of decision makers, some will inevitably realize that this avenue for compromise from the substrata of their operational base is still present, and now has a thousand curious eyes combing the details to gain an advantage (Rowhammer has been around for years, and the underpinnings of Spectre were known since the mid-90s). This is already the case with upmarket entities relying on trade secrets to protect their business model - the very same ones who run private clouds themselves, or who maintain reliance on pure hypervisor solutions they can control and measure without the overarching resource and workload coordination. Companies building valuable analytical models (AI, so to speak), producing proprietary code and content, or retaining protected data and having to answer for it - and then everyone else - will have to rethink which services are acceptable and desirable on public infrastructure, and which ones have access to things they would rather not lose.

The history of computer science is a sine curve oscillating between centralized and distributed computing models. Processing efficiency, data locality, and the security context of resources and data determine the position of this pendulum of computing paradigms. One of the drivers for public cloud has been the lack of talent to support internal systems, cloud or otherwise. As the economic pressures around data privacy ramp up, and breaches where responsibility ends at the client of the cloud provider (because the breach could be traced no further) go public, industry will have to shift to a more balanced storage and execution model to avoid the massive fines seemingly about to rain down from pending privacy regulations. Closed-source solutions with their potential licensing concerns are also not going to cut it - gaining a Mettle shell on ESXi does not produce the audit logs you would see from a HIDS-observed Nova compute host running Alpine or ASL. This leaves Open Source hypervisors and some way to manage them... which exist in several flavors, OpenStack being the best known.

When interest in this technology picks up enough to be heard at social events attended by investors in Silicon Valley or New York, someone will run the market analysis, figure out that the missing piece is the ability to provide turn-key deployment and management to people who can't hire the aforementioned talent, and invest a bit of money into taking over a market segment which won't know it exists until the rain of fines begins. Modern investors use data-driven crystal balls, and in them they will see this market opportunity with the solution already 98% built, just requiring the proper packaging. Investors and opportunity lead to an inevitable conclusion: you've not read your last headline about private cloud in the US.
