“The cloud” is a marketing term for “someone else’s computers.”
“The cloud” is in many ways the latest term the tech industry came up with to describe “someone else’s computers.” It may be less science-fiction-y than “cyberspace,” more specific than “the Internet,” and less cringeworthy than “the information superhighway,” but cloud computing is still, at its core, the practice of using processors and hard drives somewhere else to do what you want to do.
That’s not to disparage cloud computing or to diminish its usefulness – after all, that whole “Internet” thing turned out to be pretty valuable – but it’s an important mindset for organizations and developers to have when they’re considering what functions and processes should be put on the cloud, and when they’re determining what sort of cloud architecture to implement.
“Deleted” data often isn’t.
This is a critical truth that ties into the “someone else’s computers” entry above. It’s been true since the earliest days of rewritable storage media that “deleted” data often wasn’t truly gone, but rather just hidden or forgotten by the filesystem until someone went looking for it.
The growth of the computer forensics industry and the spread of data privacy laws, among other factors, made companies take more care to delete information in a way that truly removes it from their networks. However, the growth of cloud computing – combined with the rise of disaster recovery backup services – makes it harder than ever to be sure that data can’t come back from the grave. When you don’t know which storage drive – or drives – have held the information you placed in the cloud, there’s no way to be sure that the data your users thought they’d deleted in accordance with policies or regulatory requirements doesn’t live on somewhere beyond your reach.
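To make the point concrete, here’s a minimal sketch of the classic local mitigation – overwriting a file’s bytes before unlinking it. The function name and pass count are illustrative, not from the article, and the comment explains why even this technique breaks down on cloud storage:

```python
import os

def secure_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes before unlinking it.

    Caveat: on journaling filesystems, SSDs with wear leveling, and
    especially cloud block storage with replication, snapshots, and
    disaster-recovery backups, overwriting in place does NOT guarantee
    the old bytes are gone -- copies may persist beyond your reach.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)

# Example: the file is gone from the filesystem's point of view...
with open("secret.txt", "wb") as f:
    f.write(b"api-token-123")
secure_delete("secret.txt")
```

A plain `os.remove()` only tells the filesystem to forget the file; the overwrite passes are what attempt to destroy the data itself – and, as the docstring notes, even that attempt is best-effort once replication and backups enter the picture.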
Many “cloud native” applications really aren’t.
The invention and spread of cloud computing didn’t magically reprogram the entire software development industry – including the brains of developers around the world – away from the client-server world and into a microservices, container-based environment. Many applications developed with the cloud in mind are still either partially or totally based in a pre-cloud architecture model.
It’s a similar evolution to the one we saw when mainframe developers and applications transitioned to the client-server PC model, and then again when that model was exposed to the Internet and persistent networking. While the applications themselves may not be built on the old model, all too often the mindset – and skill set – of the developers are biased toward the models they’ve been familiar with for years.
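One small, common example of this carried-over mindset is configuration. The snippet below contrasts a pre-cloud habit (hardcoding an address that assumes one long-lived server) with the cloud-native habit of injecting configuration through the environment, so the same artifact runs unchanged anywhere. The variable names and URLs are illustrative assumptions, not from the article:

```python
import os

# Pre-cloud habit: configuration hardcoded for a specific, long-lived
# server, so moving the app means editing and redeploying code.
DB_URL_LEGACY = "postgres://db.internal.corp:5432/app"

def get_db_url() -> str:
    """Cloud-native habit: read configuration from the environment,
    so the same container image works in dev, staging, and production.
    Falls back to a local default for developer machines."""
    return os.environ.get("DATABASE_URL", "postgres://localhost:5432/app")
```

It’s a trivial change in code, but it reflects the larger architectural shift: cloud-native applications treat their runtime environment as disposable and interchangeable, rather than as a known machine they live on.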
Public cloud isn’t always cheaper.
In most discussions of whether to work in a public, private or hybrid cloud architecture, one of the standard factors in the “public” column is cost. In general, public cloud deployments are cheaper than other options, simply thanks to economies of scale.
However, that’s not always true. Depending on what resources an organization already has in place – and what it’s trying to accomplish – private cloud or a hybrid public-private cloud model might result in a lower total cost of ownership.
Ironically, one factor that may make private or hybrid cloud more cost-effective for some services is the transitioning of other applications out of existing corporate data centers and onto the public cloud. The cost of unused legacy data center capacity is often already baked into earlier ROI calculations, meaning that – especially for short-lifecycle services or other limited deployments – it can be more cost-effective for a company to use what it already has rather than rent someone else’s servers.
Private, hybrid and public cloud ROI can also fluctuate dramatically when data privacy, regulatory compliance and other security concerns come into play, since ensuring security on the public cloud can impose additional costs over and above those of defending against existing threats.
Launching something is easy. Keeping it up is the challenge.
Just ask any pilot: The hard part isn’t getting off the ground; it’s getting to where you want to go and then landing in one piece.
The same is true for the cloud application lifecycle: it’s relatively simple to launch an application in a cloud environment. The real challenge is ensuring that it’s constantly using the right resources, correctly doing its job, reliably reporting errors and bugs, and interacting correctly with APIs and other services.
Cloud application lifecycles require agile DevOps processes tuned for continuous improvement, integration and delivery throughout the application’s entire lifespan. Especially in microservice environments, launching an application without a plan for near-constant assessment and management is no more a best practice than a pilot settling in for a nap as soon as the plane leaves the runway.
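A small building block of that “near-constant assessment” is giving every service a health endpoint that an orchestrator or monitoring system can poll. Here’s a minimal sketch using only Python’s standard library; the `/healthz` path and response shape are common conventions, assumed here for illustration:

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START_TIME = time.time()

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal health endpoint a monitor or orchestrator can poll."""

    def do_GET(self):
        if self.path == "/healthz":
            # Report liveness plus a little operational context.
            body = json.dumps({
                "status": "ok",
                "uptime_s": round(time.time() - START_TIME, 1),
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

# To serve: HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

In a real deployment this endpoint would be wired into whatever performs the polling – a load balancer, an orchestrator’s liveness probe, or an external monitor – so that a failing instance is detected and replaced rather than quietly degrading.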
Interested in learning more about the secrets to cloud computing success? Contact us today.