Extract: Cloud Native Transformation: Practical Patterns for Innovation: Understanding the principles

Editor’s note: This is an exclusive extract from Cloud Native Transformation: Practical Patterns for Innovation, by Jamie Dobson, Pini Reznik and Michelle Gienow, published by O’Reilly Media. The book is currently available for free for a limited period of time. You can find out more here.


Cloud native is a lot to wrap your head around: it’s an architecture, a tech stack, an approach to software development and delivery, and a complete paradigm shift all rolled into one! To confuse things even more, cloud native implementations vary widely between enterprises thanks to the sheer number of tools available as well as the complexity they offer. However, simply understanding these five fundamental principles of cloud native architecture—and, even more importantly, how they interrelate and support each other—gives you the keys to the cloud native kingdom, no matter how complicated it gets.

To reiterate, the five principles consist of:

  • Containerisation
  • Dynamic management
  • Microservices
  • Automation
  • Orchestration

Containerisation

Once you’ve defined your service-based architecture, it only makes sense (for just about everybody, everywhere) to containerise things. Containers are lightweight, standalone executable software packages that include everything required to run an application: code, runtime, system tools, libraries, and settings. They are a sort of “standard unit” of software that packages up the code with all of its dependencies so it can run anywhere, in any computing environment. You can link containers together, set security policies, limit resource usage, and more.

Think of them as scalable and isolated virtual machines in which you run your applications. (We know this statement has historically launched a thousand flame wars, so let’s at least agree that containers are simply much faster, okay?) Containers isolate an application and its dependencies, even its own operating system, into a self-contained unit that can run on any platform, anywhere. This means you can host and deploy duplicate containers worldwide (thanks to your infrastructure as a service!) so your operations are flexible, reliable, and fast.
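To make that a little more concrete, here is a minimal sketch of driving a container from code using the Docker SDK for Python. It is purely illustrative, not an example from the book: it assumes a local Docker Engine is running and the docker package is installed, and the image name, environment variable, and limits are invented.

```python
# Minimal, illustrative sketch of running a packaged application with the
# Docker SDK for Python. Assumes a local Docker Engine and the `docker`
# package (pip install docker); the image name and settings are invented.
import docker

client = docker.from_env()  # connect to the local Docker Engine

# Run the container detached, with a memory limit and an explicit port mapping,
# echoing the "limit resource usage, set security policies" point above.
container = client.containers.run(
    "example.com/orders-service:1.4.2",   # hypothetical image name
    detach=True,
    mem_limit="256m",
    environment={"ORDERS_DB_URL": "postgres://db.internal/orders"},
    ports={"8080/tcp": 8080},
)

print(container.short_id, container.status)
```

Because everything the application needs is inside the image, the same few lines work on a laptop, in a data centre, or on any cloud provider that can run the engine.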

Dynamic management

This is where your new system absolutely shines. In short, dynamic management means making optimum use of the benefits conferred by your new cloud platform. Compute, network, and storage resources are provisioned on demand, using standardised APIs, without up-front costs—and in real-time response to real business needs.

Dynamic management takes away the costs typically involved in capacity planning and provisioning of hardware resources. Instead, a team of engineers can start deploying value to production in a matter of hours. Resources can also be deallocated just as quickly, closely mirroring changes in customer demand.

Operating compute, network, and storage resources is traditionally a difficult task that requires specialised skills. Obtaining these skills is often time-consuming and expensive. Even more important, though, is speed: humans are never going to be able to cycle resources up and down as quickly as demand surges and sinks. Letting your chosen cloud platform run things dynamically means resource life cycles get managed automatically and according to unwaveringly high availability, reliability, and security standards.
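What provisioning through a standardised API looks like can be sketched in a few lines. The example below uses AWS EC2 via boto3 purely for illustration; it assumes credentials are already configured, and the AMI ID and instance type are placeholders, not recommendations.

```python
# Illustrative sketch of dynamic management: provision compute on demand through
# a standardised API and release it when demand drops. AWS EC2 via boto3 is used
# only as an example; the AMI ID below is a placeholder, not a real image.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")

# Scale out in response to real demand -- no capacity planning, no up-front cost.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=3,
)

# ... serve the traffic spike ...

# Deallocate just as quickly once demand falls, so spend tracks actual usage.
for instance in instances:
    instance.terminate()
```

In practice this logic lives in autoscaling policies or platform tooling rather than ad hoc scripts, but the principle is the same: resources appear and disappear in minutes, driven by demand rather than by procurement cycles.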

Microservices

Microservices (also known as microservice architecture) are an approach to application development in which a large application is built as a suite of modular components or services. Each service runs a unique process and often manages its own database. A service can generate alerts, log data, support UIs and authentication, and perform various other tasks. Microservices communicate via APIs and enable each service to be isolated, rebuilt, redeployed, and managed independently.

They also enable development teams to take a more decentralised (non-hierarchical) and cross-functional approach to building software. By using microservices to break up a monolithic entity into smaller distinct pieces, each team can own one piece of the process and deliver it independently. Ideally, some of these parts can even be acquired as an on-demand *-as-a-Service from the cloud.

Think about the companies setting the bar for everyone else in terms of performance, availability, and user experience: Netflix, Amazon, the instant messaging platform WhatsApp, the customer-relationship management application Salesforce, even Google’s core search application. Each of these systems requires everything from login functionality, user profiles, recommendation engines, and personalisation to relational databases, object databases, content delivery networks, and numerous other components, all served up cohesively to the user. By breaking all this functionality into modular pieces and delivering each service separately and independently, you increase agility.

Each microservice can be written in the most appropriate language for its particular purpose, managed by its own dedicated team, and scaled up or down independently as needed. And, unlike in a tightly coupled monolithic application, the blast radius from any change is contained within that microservice’s footprint.
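As a minimal sketch of one such service (using Flask purely for illustration; the endpoints and data are hypothetical), a single team could own, deploy, and scale something like this independently of every other service:

```python
# Minimal sketch of a single microservice: one independently deployable process
# exposing a small API over HTTP. Flask is used purely for illustration
# (pip install flask); the endpoints and data are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for this service's own datastore; in practice each
# microservice typically manages its own database.
RECOMMENDATIONS = {"user-42": ["book-101", "book-202"]}

@app.route("/recommendations/<user_id>")
def recommendations(user_id):
    # Other services call this API over HTTP rather than touching the data
    # directly, which is what keeps the pieces independently deployable.
    return jsonify(items=RECOMMENDATIONS.get(user_id, []))

@app.route("/healthz")
def healthz():
    # A health endpoint lets the platform restart or reschedule the service
    # automatically when something goes wrong.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The recommendation service knows nothing about login, billing, or search; it exposes one small API, and everything else reaches it over the network.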

Automation

Manual tasks are replaced with automated steps in scripts or code. Examples are automated test frameworks, configuration management, continuous integration, and continuous deployment tools. Automation improves the reliability of the system by limiting human errors in repetitive tasks and operationally intensive procedures. In turn, this frees up people and resources to focus on the core business instead of endless maintenance tasks.
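One small, concrete example of such an automated step is a smoke test that the CI pipeline runs after every deployment. The sketch below uses pytest and requests; the URL and endpoints are the hypothetical ones from the microservice sketch above, not part of any real system.

```python
# Sketch of one automated step: a smoke test run by the CI pipeline after every
# deployment, replacing a manual "click around and check" step. Assumes pytest
# and requests are installed; the URL and endpoints are hypothetical and match
# the illustrative microservice sketched earlier.
import os

import requests

BASE_URL = os.environ.get("SERVICE_URL", "http://localhost:8080")

def test_service_is_healthy():
    # If the service does not come up, this fails and the pipeline blocks the release.
    response = requests.get(f"{BASE_URL}/healthz", timeout=5)
    assert response.status_code == 200

def test_recommendations_endpoint_responds():
    response = requests.get(f"{BASE_URL}/recommendations/user-42", timeout=5)
    assert response.status_code == 200
    assert "items" in response.json()
```

Run automatically on every change, checks like these catch regressions within minutes instead of relying on someone remembering to verify them by hand.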

Simply put, if you are trying to go cloud native but don’t have automation, then you are rapidly going to get yourself in a mess. Enterprises come to the cloud to deploy more quickly and frequently. If you haven’t fully automated your deployment processes, then suddenly your Ops staffers spend all the time they saved by no longer managing those on-premises servers on manually deploying your new, expedited production cycle. More frequent deployments also mean more opportunities to screw up every week; putting things into production faster and scaling them faster means generating bugs faster. Automated deployment takes the grunt work out of constant implementation, while automated testing finds problems before they become crises.

Orchestration

Once a microservices architecture is in place and containerised, it is time to orchestrate the pieces. A true enterprise-level application will span multiple containers, which must be deployed across multiple server hosts that form a comprehensive container infrastructure, including security, networking, storage, and other services. An orchestration engine deploys containers at scale for the required workloads, schedules them across a cluster, and scales and maintains them, all while integrating everything with the container infrastructure.
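At its heart, an orchestrator runs a reconciliation loop: you declare the state you want, and the engine keeps comparing it with what is actually running and acts on the difference. A heavily simplified, self-contained sketch of that idea (purely illustrative, not real Kubernetes code) looks like this:

```python
# Heavily simplified sketch of the reconciliation loop at the heart of an
# orchestrator: compare the declared desired state with the observed actual
# state and act on the difference. Purely illustrative.
import time

desired_state = {"orders-service": 3, "recommendations-service": 2}  # replicas we want
actual_state = {"orders-service": 1}                                 # replicas running now

def start_container(service: str) -> None:
    print(f"scheduling a new {service} container onto a node with spare capacity")
    actual_state[service] = actual_state.get(service, 0) + 1

def stop_container(service: str) -> None:
    print(f"stopping one surplus {service} container")
    actual_state[service] -= 1

def reconcile() -> None:
    # Converge the actual state towards the desired state, one step per service.
    for service, wanted in desired_state.items():
        running = actual_state.get(service, 0)
        if running < wanted:
            start_container(service)
        elif running > wanted:
            stop_container(service)

if __name__ == "__main__":
    while actual_state != desired_state:
        reconcile()
        time.sleep(1)  # a real engine also reacts to node failures, health checks, etc.
    print("cluster converged:", actual_state)
```

A real orchestrator does the same thing continuously, across hundreds of nodes, while also handling failures, rolling updates, networking, and storage.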

Orchestration encourages the use of common patterns when planning the architecture of services and applications, which both improves reliability and reduces engineering effort. Developers are freed from wrestling with lower-level abstractions and can focus on the application’s overall architecture.

This is where Kubernetes comes in, and it is one of the very last things to be done in a cloud native migration. If you implement an orchestrator first, you are fighting a battle on multiple fronts at once. Using an orchestrator effectively is a highly complex endeavour, and getting it right often depends on the flexibility, speed, and ability to iterate that you have already put in place. The other cloud native principles—cloud infrastructure/dynamic management and automation—must come first. Quite often, when we are called in to work with a company whose cloud migration has gone wrong, what we find is that an orchestrator was put in before everything else was ready.

Use your platform to build your platform—before you start worrying about orchestrating all the pieces!


About the authors: Jamie Dobson is co-founder and CEO of Container Solutions, a professional services company that specialises in cloud native transformation. With clients like Shell, Adidas, and other large enterprises, CS helps organisations navigate not only technology solutions but also adapt their internal culture and set business strategy. Jamie is the co-author of the new book Cloud Native Transformation: Practical Patterns for Innovation (O’Reilly Media, 2020). A veteran software engineer, he specialises in leadership and organisational strategy, and is a frequent presenter at conferences.

Pini Reznik is co-founder and chief technology officer of Container Solutions. Starting as a developer more than 20 years ago and moving through technical, managerial, and consulting positions in configuration management and operations, Pini acquired a deep understanding of software delivery processes. His company helps organisations in Europe and North America improve their value to customers by modernising their software delivery pipeline.

Michelle Gienow, senior content marketing manager at Gatsby, is a former content team lead at Container Solutions. She’s a web developer and JAMstack advocate with a passion for serverless architecture.

Want to find out more about topics like this from industry thought leaders? The Cloud Transformation Congress, taking place on 13 July 2021, is a virtual event and conference focusing on how to enable digital transformation with the power of cloud.
