Kubernetes

Getting started with Kubernetes

Nigel Poulton
Owner @ Nigelpoulton.com

Kubernetes, or K8s, is meant to make running containerized applications easy, but putting it to good use isn't always the conceptual breeze it ought to be. Fortunately, would-be DevOps masters don't have to make the trek alone: This awesome Humanitec webinar gets you up to speed with help from Nigel Poulton, an established expert and renowned trainer in the field.

Don't have time to watch a whole meetup recording? These quick takeaways hit the key points to help you understand containerization without skipping the fundamentals.

But first, a quick introduction:

About the Speaker 

Nigel Poulton has written multiple best-selling books about Kubernetes and cloud-native technologies, including the regularly updated The Kubernetes Book and the hands-on Quick Start Kubernetes. He's truly passionate about containerization, having been on board with Docker since its pre-1.0.0 days. All told, he's worked closely with these technologies for about a decade.

Today, Nigel provides industry training and publishes multiple courses and videos on the topic – along with tests, quizzes, and review exams designed to replicate the real-world interview experience. His work has reached more than 1 million tech professionals and helped multiple companies expand their Docker and Kubernetes proficiency. You can learn more by checking out his website at nigelpoulton.com.

Getting started with Kubernetes – a quick recap

This talk covered the fundamentals of containers and Kubernetes, breaking the concept down into five key topics:

  1. What Is a Container?
  2. What Is Kubernetes and Why Is It Important?
  3. What Are Some Strengths and Weaknesses of Kubernetes?
  4. Kubernetes and You 
  5. Getting Started With Kubernetes

Nigel also answered a few hot audience questions along the way – Read on to get the gist.

1. What is a container?

Nigel explains containerization from two complementary perspectives: The application stack view and the development pipeline view.

The application stack view

Taking a high-level approach, the typical application stack includes three layers: The hardware, the OS that owns and manages said hardware, and the app that runs on the OS. These levels offer multiple entry points for virtualized software design.

Most devs are familiar with virtualization at the hardware level. For instance, your app might use a virtual web server that runs as an independent computing environment on a shared physical host machine. Every virtual machine (VM) on a host features abstracted representations of the hardware components found in a normal system – the hard drives, network cards, memory, and other devices.

Containers virtualize things at the operating system level. Instead of virtual hard drives and CPUs, they rely on virtual file systems and process IDs. Although containerization is a type of virtualization, it's more about packaging a computing environment than replicating an entire computing system.
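
To make this concrete, here's a minimal sketch using Docker (assuming Docker is installed locally). The container gets its own process IDs and file system even though it shares the host's kernel:

    # Run a throwaway Alpine Linux container and list its processes.
    # ps sees only the container's own process tree, starting at PID 1.
    docker run --rm alpine ps

    # The container also sees its own root file system, not the host's.
    docker run --rm alpine ls /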

The impact of this distinction is evident in containerization's biggest benefits: Containers are typically smaller and faster than VMs. According to Nigel, you can do a whole lot more with the same hardware – a server that could run 10 VMs simultaneously might support 50 containers powering the same application.

Each VM instance needs its own copy of the OS. Instances also consume CPU, RAM, and other system resources that the application can no longer use.

 

Containerization significantly shaves down the resources needed to run an app because a single operating system can host multiple containers. Each still occupies a separate slice of the available resources, but the burden per instance is far lighter. Apps also start faster because they don't have to boot a fresh OS each time.
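
As a quick illustrative sketch (the numbers are arbitrary), container runtimes let you size each container's slice explicitly:

    # Cap a single container at half a CPU core and 256 MB of RAM.
    docker run --rm --cpus=0.5 --memory=256m alpine sleep 5

Kubernetes expresses the same idea through per-container resource requests and limits.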

The dev pipeline view

From a development pipeline perspective, containerization starts with your application's code. Once you're ready, you build the source code into a container image and push it to a registry. From there, you're ready to run it in production. If you've done things correctly, the image bundles the app's dependencies, so the deployment takes care of them for you.
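
In practice, that pipeline often looks something like the sketch below. The registry address, image name, and tag are placeholders, and it assumes a Dockerfile already describes how to build the app:

    # Build a container image from the source code in the current directory.
    docker build -t registry.example.com/my-team/my-app:1.0.0 .

    # Push the image to a registry so other environments can pull it.
    docker push registry.example.com/my-team/my-app:1.0.0

    # Run it in production, for example as a Kubernetes Deployment.
    kubectl create deployment my-app --image=registry.example.com/my-team/my-app:1.0.0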

2. What is Kubernetes and why is it important?

Kubernetes can be complicated – According to Nigel, using it is likely harder than it needs to be. Even though it's continually improving, Kubernetes still has a ways to go before it achieves optimal usability. 

Why invest the extra time, money, and mental effort? The main reason is that Kubernetes acts as a viable orchestrator or "operating system of the cloud".

Nigel insightfully argues that a typical application tends to be like a disorganized soccer or football team – Even if there are players with different strengths on the field, the team still needs a coach to assign them specific roles. The coach also reacts to changes, like player injuries, and makes decisions, such as switching out the team lineup on the fly in response to gameplay. 

If cloud-native applications live up to the analogy, then your microservices stand in for the players. Kubernetes, then, fulfills the role of the coach. It can decide when to start services, manage shared resources, and present users with a unified front even though the app features many distinct parts. Kubernetes can also handle complex management decision-making at runtime, such as increasing the number of containers that work together to provide a service in response to real-time demand. 
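
For example, scaling a service becomes a one-line request to the orchestrator rather than a manual provisioning exercise. The Deployment name and numbers below are hypothetical:

    # Ask Kubernetes to run five copies of the "web" service instead of two.
    kubectl scale deployment web --replicas=5

    # Or let Kubernetes decide within bounds, based on observed CPU usage.
    kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80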

3. What are some strengths and weaknesses of Kubernetes?

Containerization is becoming the standard in modern software development. Its inherently compartmentalized workflow might partially explain why: Instead of packaging an entire app as a single monolithic service or binary artifact, containerization usually involves packaging the individual microservices that comprise the whole.

Viewed in this light, it's easy to see how containerization with Kubernetes makes maintenance less painful. If any element needs updating or maintenance, it becomes far simpler to handle that upgrade in isolation: Patches only touch the relevant code instead of the entire app. And regardless of the underlying hardware or cloud implementation behind the scenes, a Kubernetes application can usually run on top of it.
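
As a rough sketch, patching a single microservice in a Kubernetes Deployment might look like this; the service name, registry, and image tag are placeholders:

    # Roll out a patched image for just the "payments" service.
    kubectl set image deployment/payments payments=registry.example.com/shop/payments:1.0.1

    # Watch the rolling update complete; the rest of the app keeps running untouched.
    kubectl rollout status deployment/payments

    # If the patch misbehaves, revert that one service without touching the others.
    kubectl rollout undo deployment/payments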

What about when you start with a monolithic app? The most effective strategy in such situations involves refactoring. Although this requires significantly more effort than simply installing an app on a VM, the payoff is far greater.

As an "OS of the cloud," Kubernetes is also rather setup-agnostic. It doesn't care whether you're running on a public, private, or hybrid cloud: It deploys and runs the code regardless of the implementation – with the notable exception that using cloud-provider-proprietary features or services can pose some migration challenges.
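
One way this shows up day to day: the same manifests and commands work against any conformant cluster, so changing targets is mostly a matter of pointing kubectl at a different context. The context names and manifest file below are hypothetical:

    # List the clusters your kubeconfig knows about.
    kubectl config get-contexts

    # Deploy the same manifest to an on-prem cluster...
    kubectl --context=onprem-cluster apply -f app.yaml

    # ...and to a managed cloud cluster, unchanged.
    kubectl --context=cloud-cluster apply -f app.yaml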

4. Kubernetes and you

From a business standpoint, there are ample reasons to use Kubernetes. Companies know that K8s is an in-demand career skill, and using containerization lets them surf this proverbial wave to attract leading talent. 

Best of all, getting on board leads to a desirable outcome: Future-proofed software that you can readily adapt to new technology as it becomes available. If Kubernetes isn't currently the perfect tool for your use case, the ecosystem's rapid expansion could soon make it a serious contender.

5. Getting started with Kubernetes

Kubernetes is extremely easy to get started with. Even if you're not too enthused about setting up your own service from scratch, you can find a convenient hosted option to handle most of the details. Taking your first steps in such a hosted environment and building confidence before flying solo might be ideal.
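
If you'd rather experiment locally before (or alongside) a hosted service, tools like kind or minikube spin up a throwaway cluster in minutes. A minimal sketch, assuming Docker and kind are installed:

    # Create a single-node Kubernetes cluster running inside Docker containers.
    kind create cluster --name playground

    # Confirm the cluster is up and kubectl is pointed at it.
    kubectl get nodes

    # Tear everything down when you're done experimenting.
    kind delete cluster --name playground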

During the meetup, the attendees also asked how to get started with Kubernetes in a forward-thinking, career-minded way. Nigel's recommendation about using a hosted service also applied here, but with some important caveats. 

Rethinking the DevOps approach

First, remember that working with containerized software spans two distinct domains: Managing a Kubernetes deployment that's already been containerized is a different beast from engineering an app for containerization. While it pays to get acquainted with both areas, you might want to adjust your mental model somewhat.

If you come from an operations background, one thing that might trip you up is that you'll never log into a live container to fix things the way you might with a classic monolithic app. Instead, you'll correct errors at the source and redeploy. For some people, getting used to this kind of workflow can be one of the biggest hurdles. 
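
In practice, that workflow tends to look like the sketch below: diagnose with read-only tooling, then fix, rebuild, and roll out, rather than patching the running container. Names, registry, and tags are placeholders:

    # Diagnose without modifying the running containers.
    kubectl logs deployment/web
    kubectl describe pods -l app=web

    # Fix the bug in the source code, then rebuild and push a new image...
    docker build -t registry.example.com/my-team/web:1.0.1 .
    docker push registry.example.com/my-team/web:1.0.1

    # ...and roll it out; the broken containers are replaced, not repaired in place.
    kubectl set image deployment/web web=registry.example.com/my-team/web:1.0.1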

Improving related professional skills

Nigel also recommends mastering a programming language. You might not go as far as earning the certs and experience needed to join a dev team in a lead role, but knowing your way around a language imparts valuable skills. This is particularly relevant for those whose professional experience leans toward the operations side of DevOps.

Investing in formal training

Another confusing issue can be that of certifications. Containerization isn't exactly fresh out of the oven, but people are still tweaking the recipe some ten years after its big debut – So where should your search for credentials begin?

Nigel notes that the KCNA, or Kubernetes and Cloud Native Associate, exam only became available near the end of 2021. It covers fundamentals like container orchestration and the Kubernetes ecosystem, and it also touches on cloud-native architecture and application delivery. According to Nigel, this entry-level cert isn't too tough, and passing it is a wise career move: Taking the KCNA sets devs up for the more specialized Certified Kubernetes Application Developer, or CKAD, certification.

The Cloud Native Computing Foundation, or CNCF, maintains the KCNA and CKAD certification standards. Other certifications may teach similar skills, but the CNCF options are vendor-neutral and potentially more enticing to employers. Combined with hands-on operations experience, getting a certification could be the ideal way to chart a path towards your Kubernetes-ready career future.