
Doing DevOps the Cloud Native Way

Cloud native technologies change how people work. Therefore, it is important that we observe cloud native DevOps as a phenomenon.
Sep 10th, 2018 8:42am

This post is part of a series, sponsored by CloudBees, exploring the emerging concept of  ‘Cloud native DevOps.’

Cloud native DevOps is not some trend or methodology that we all need to embrace and embody before it overtakes us like a tropical storm. DevOps is about people and how they work together. Cloud native technologies change how people work. It is therefore not only pertinent but important that we observe cloud native DevOps as a phenomenon.

The Efficiency Potential

At a June 2016 Public Sector Summit conference, Amazon Web Services solutions architect Alex Corley introduced the idea of cloud native DevOps by tying it to what he described as the philosophy of continuous business improvement. Corley called the concept inherently redundant, saying no business is ever truly done improving itself.

“This is not just about technology. It’s about business and people,” Corley told attendees.  “To me, this is the essence of what DevOps is: We’re all participating in activities that can continuously improve all functions of business, and involving all employees. You hear about the breaking down of silos, the integration of business — kind of along that philosophy… By improving standardized activities and processes, we can become more efficient and reduce waste.”

The concept’s cloud nativity becomes more practical for businesses, the AWS architect continued, once it has been freed from the constraints of traditional, on-premises systems management. Theoretically, many of these “undifferentiated heavy lifting” work processes may be automated in order to expedite them. But cloud native development, he asserted, advances the notion that they may be eliminated altogether.

That elimination would, in turn, radically transform the requirements of DevOps, by way of changing the definitions of what Dev teams do and what Ops teams do.

Corley’s conception of cloud native DevOps would arguably apply only to the cloud native portion of an organization’s IT arsenal, though that portion would incorporate its serverless functions as well. In a conversation with The New Stack, DXC Technology’s Chief Technology Officer for application services, JP Morgenthal, described cloud native DevOps and serverless delivery as being bundled together. When someone refers to one, he suggested, she includes the other.

“Really, what you’re talking about at this point is a platform that handles the release management, the lifecycle management, the telemetry, instrumentation, and the security around that component,” remarked Morgenthal. “It’s really a PaaS [platform-as-a-service], but more so, the whole ideal of serverless is, I don’t have to worry about where that runs. I don’t worry about the scalability of my application running on that platform. The economics are, I’m only paying for the very limited timespan when that function is executing. It’s a very different model of computing than what we’ve been doing in the past, and frankly, why would I want to then also have to go and invest at least hundreds of thousands of dollars in setting up all of that infrastructure to do the same, exact thing?”
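
To make the model concrete, the sketch below shows roughly what such a function looks like from the developer’s side, written as an AWS Lambda-style Python handler behind an HTTP endpoint. The event shape, field names, and business logic are illustrative assumptions, not drawn from Morgenthal’s remarks.

```python
# A minimal sketch of the serverless model described above, written as an
# AWS Lambda-style Python handler. The event shape, field names, and logic
# are illustrative assumptions, not taken from the article.
import json

def handler(event, context):
    # The platform decides where and when this runs, scales it, and bills
    # only for the time spent executing; none of that appears in the code.
    order = json.loads(event.get("body", "{}"))
    total = sum(item["price"] * item["quantity"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"orderTotal": round(total, 2)}),
    }
```

Everything Morgenthal lists (release management, scaling, telemetry, billing by execution time) lives outside this file, in the platform.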

Morgenthal perceives the act of maintaining on-premises infrastructure as one class of “undifferentiated heavy lifting,” and there’s a good chance that Amazon’s Corley would agree. Corley’s use of the phrase harkens back to 2006, when Amazon CEO Jeff Bezos first defended his company’s concept of cloud computing at an O’Reilly conference. At the time, Bezos estimated that the amount of work organizations expended on actually executing the core vision of their ideas was about 30 percent.

Amazon characterizes its cloud platform as the ultimate eradicator of undifferentiated busy-work. Leveraging the public cloud as an automation tool, from Amazon’s perspective, offers an organization a kind of foundational repair — lifting up its infrastructure, cleaning out what Bezos calls the “muck,” and shifting it onto a single outsourced platform for everyone in the organization to use. Capital One developer Kapil Thangavelu, appearing on a 2017 episode of The New Stack Makers podcast, argued that serverless architecture — and by extension, cloud nativity — refers not to the elimination or even the reduction of IT operators, but rather the concentration on “a common base platform.”

Viewed from this perspective, one could argue that, for a DevOps platform to be completely effective, it actually must be cloud native — it should be constructed and located in an environment that is, from the outset, accessible to all yet set apart from any one department’s silo or exclusive oversight.

The Packaging Predicament

In an October 2016 webinar, Capgemini Senior Technical Architect Les Frost presented an argument that boiled down to this: The broader concept of DevOps cannot be ingested by an organization incrementally. It has to be swallowed all at once, or not at all. Thus the platform on which an organization’s DevOps automation is maintained would need to be a complete, singular mechanism — “DevOps in a Box.”

And for that reason, Frost argued, the whole thing may be a waste of time.

“If you spend the next three years implementing DevOps, will you actually be out-of-date?” asked Frost rhetorically. “Because while you’re focusing on DevOps, there’s a whole host of companies that are going to be disrupting the market. We’re all aware of these sorts of disruptors… So what you’ve got to ask yourself is, should we be spending our time on DevOps, or should we be spending our time on getting business-critical functionality out as quickly as we can?”

Frost is certainly advocating a counter-intuitive idea. He went on to present microservices as one example of a technology that addresses the critical need to produce business functions in shorter, easier-to-implement, easier-to-test iterations. “These things that are happening in the IT market are all about getting stuff out quickly,” he said. “And the reason people want to get stuff out quickly is that there are other people who will get stuff out quickly if you don’t. So in that market, is it right to be spending all your time on DevOps?”

Since microservices architecture is a means to expedite software delivery in shorter cycles, its proponents would tell you, it’s actually an implementation of DevOps. As The New Stack’s Alex Handy wrote in April 2018, “In the microservices world… it’s generally DevOps’ duty to set up all of the infrastructure required to build out at-scale environments. That means Web application servers, registries and repositories, OS and container images, virtualized networking, firewalls, load balancers, message queues, and reverse proxies.”
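
Much of that list can itself be expressed as code rather than assembled by hand. As a hedged illustration, not taken from Handy’s article, the sketch below uses the official Kubernetes Python client to declare one such building block, a replicated web application, programmatically; the image, names, namespace, and replica count are placeholders.

```python
# A hedged sketch of declaring one piece of that infrastructure as code,
# using the official Kubernetes Python client. The image, names, namespace,
# and replica count are illustrative placeholders.
from kubernetes import client, config

def create_web_deployment():
    config.load_kube_config()  # assumes an existing kubeconfig and cluster
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="web",
        image="example/web-app:1.0",  # placeholder image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # declare the scale; the platform reconciles toward it
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    create_web_deployment()
```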

That list sounds dangerously close to a recipe for what some would call “undifferentiated heavy lifting.” And this is at the heart of Capgemini’s counter-argument: Software development is evolving toward the ability to produce a function or service based almost solely upon intent, without regard to the requirements of its infrastructure. It’s a methodology that is, by design, more Dev and less Ops. Indeed, it may be the separation of these underlying functions that makes the serverless approach, and to some extent the microservices approach, valuable and viable. Why bother, the counter-argument asks, investing the time and effort needed to integrate tasks that are no longer relevant?

One of the dangers of the microservices approach, if we take this train of thought to its extreme, is that it could frame the entire scenario of an enterprise’s software from the exclusive perspective of the developer. Since cloud native platforms are marketed towards developers’ interests, the result is that, at the minimum, Ops professionals could feel left out. And at the maximum, they could be left out.

“The beautiful thing about being a software developer is, everything that I’m reaching out towards is controllable by code in some form or fashion,” said CloudBees Jenkins Community Evangelist R. Tyler Croy, in an interview with The New Stack. “From the operations standpoint, that’s not the case.

“In a traditional data center environment,” Croy continued, “I don’t have an API to go provision a new purchase order for a rack of Dell machines; I’ve got to go through manual processes, fill out a spreadsheet, and some actual person has to come rack it, burn it in, provision it, and then I can use that piece of infrastructure. From the operator world, they’re looking at things where I’ve got to mix actual, human, real-world, manual processes with my automation. And developers are tending to look at everything as software, so they can automate everything.”
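
Croy’s contrast becomes concrete when provisioning itself is an API call. The minimal sketch below, assuming Amazon EC2 via the boto3 library, stands in for the purchase order, the spreadsheet, and the person doing the racking; the AMI ID, region, instance type, and tags are illustrative placeholders.

```python
# A minimal sketch of provisioning compute with an API call instead of a
# purchase order, using boto3 against Amazon EC2. The AMI ID, region,
# instance type, and tags are illustrative placeholders.
import boto3

def provision_build_agent():
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "build-agent"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print(provision_build_agent())
```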

Developers automating everything is not the point of DevOps. And if a cloud native platform promotes automation as code, then it could feasibly enable developers to automate certain operator roles completely out of the picture.

Electing a Conductor

The entirety of an organization’s business model must encompass not just the automation of everyday business functions, believes CloudBees’ Croy, but also the creation and nurturing of ideas generated by the human beings performing those functions. Any cloud native DevOps platform would have to include idea creation, if indeed its intention is for those ideas to take root in the cloud. Thus it would be the ability of people to devise ideas and to innovate that could not only preserve their jobs but bring them into a closer-knit loop.

What business process engineers throughout history have either failed to understand or chosen to avoid mentioning, remarked Croy, is the reality that many ideas are bad. Relatively few actually mature all the way to the deployment phase. Yet this is what business methodologies such as Agile do try to take into account: Good products and services often develop from under the carcasses of bad ideas. It’s only through the implementation process that many ideas may be called out as bad.

For a DevOps platform to deliver on one of its practitioners’ stated goals of encouraging innovation, he went on, it must provide support for failure. That’s not to say it should prune failed processes from the tree of development, but rather that it should suspend the development of processes that are currently failing until circumstances change and one or more bad ideas converge into, or otherwise catalyze, a good one.

But by “platform” in this context, are we referring to a single application, like the business process management (BPM) environments of ancient times (the 1990s)? Or a multiplicity of tools interfacing through plug-ins and APIs — a kind of, to coin a phrase, stack?

“I think this problem has to be addressed by multiple tools working in concert,” Croy responded. “Where we get into questions of religious affiliation with tools, or appreciation of one approach over another, I think there does need to be a conductor… The big question that is hard to answer impartially is, where should that overall problem statement — ‘problem identified’ to ‘problem solved’ — be modeled? Who’s the conductor of the orchestra?”

That sounds like the classic question of which tool lies at the center of the ecosystem — a question asked quite often about Docker in recent months, and not in a good way. Certainly, the various open source vendors will agree to disagree on which tool gets to play conductor. Yet even if the role of conductor remains for each organization to designate for itself, will it matter whether or not that conductor is “cloud native”?

“I don’t think it matters,” he answered. “I’m comfortable stating this now: There is not a single, successful business in the world right now that’s not relying on cloud native technologies in some form… To me, that ship has sailed. I think the idea of owning everything is now passé, from a corporate standpoint.”

If “the cloud” has evolved into a virtual staging ground for digital functionality, encompassing all infrastructure that raises virtualization to that same level, then for DevOps to be “cloud native” may already be as meaningful as for a society to be “world-native.” In just the time it took us to examine the idea, it has already become archaic. Of course, that’s exactly the outcome that Capgemini’s Les Frost warned would happen.

Title image of the first weeks of construction of the Golden Gate Bridge in San Francisco, circa 1934, from the U.S. Library of Congress, in the public domain.
