Sourcemation - the Single Source of Container Images


When planning to introduce containerization in an organization, we often reach for Kubernetes. The choice seems fully justified: Kubernetes is a well-established, open-source platform, valued for its flexibility, security, and ease of management. At least, that is the impression left by the many materials praising containerization with Kubernetes. It is worth checking how far these positive features match reality. Let’s start with the basics: what is Kubernetes, and what underpins its popularity? And when is it worth considering Rancher or OpenShift instead?

Kubernetes is an open-source platform for managing and automating containerized applications. Its origins date back to 2014, when Google released the first version. Since then, Kubernetes has undergone numerous modifications, with many of its features standardized. Although Kubernetes has become synonymous with containerization, it’s important to remember that it is not a standalone containerization tool. To understand how it works, it’s useful to explain what a container is and what tasks an orchestrator performs.

How Did Containerization Come About?

Let’s imagine that the creators of containerization had the following concept in mind: what if we took one of the more popular Linux distributions on which we want to run an application and removed everything unnecessary, leaving only what is essential to run it? If we further configured the software to launch the required components automatically and, in case of failure, to recover from a previously created image, we would have a lightweight, efficient virtual machine requiring minimal resources and almost no maintenance. In case of problems, it would be enough to delete and recreate it. Voilà! We have containerization.

This model has its limitations, however. Data stored in such a machine launched from an image is not persistent. Applications running inside must therefore be stateless and use an external database or storage, and any change in system configuration requires building a new image. Although this is a simplified vision of containers, it illustrates the basic assumptions and shows some of the differences from traditional virtual machines, even though the similarities are undeniable.

What Exactly is a Container?

A container is a packaged, isolated runtime environment that includes the components necessary to run a program (e.g., configuration files or libraries). It is usually based on a Linux system, often free of licensing costs. Containers operate in isolation and are unaware of the existence of other containers, yet they can communicate with each other and with the host over the network, and this communication is managed and controlled. A virtual machine can act as a host for a container, but the container itself is not a virtual machine and cannot host other containers: containers cannot be nested. Typically, each container runs one application, although there is some flexibility here depending on the container’s contents; the general assumption is that a container should be as simple as possible.

In the traditional model, an application is a single unit; in the container approach, it is divided into loosely coupled services. This allows microservices to be developed independently, without touching the rest of the application’s components. One could spread microservices across traditional virtual machines, but such a solution is less flexible, less efficient, and harder to manage, especially for large and complex applications. Containerization solves many of the problems encountered with virtual machines.
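As a sketch of the one-application-per-container idea, a minimal Kubernetes Pod manifest might look like the following (the names and image are illustrative, not from the article):

```yaml
# A minimal Pod: one container packaging a single application.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.27   # the image bundles the app plus its libraries and config
      ports:
        - containerPort: 80
```

The container image itself carries everything the process needs; the Pod only declares how to run it.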

Kubernetes and Containerization

What role does Kubernetes play in the context of containerization? When deploying a microservices-based application, we most likely want each service to run in a separate container. We will, however, notice differences in functionality and resource needs between microservices, and vertical scaling of a container is usually not an option. Performance issues must be addressed by horizontal scaling, i.e., adding more container instances managed through a manifest. This is somewhat similar to the application clusters known from traditional solutions (WebLogic or JBoss). Docker, the most commonly used open-source containerization tool, is responsible for creating and managing containers. Docker alone, however, does not handle auto-scaling, self-healing, or load balancing; that requires additional software, such as Docker Swarm. And while Docker Swarm enhances Docker’s functionality and flexibility, it is not a full-fledged orchestrator but rather an extension of the containerization tool. Kubernetes significantly surpasses Docker Swarm in this regard: it is a comprehensive tool that can use Docker (or another runtime, such as containerd) as its container engine and offers far more advanced features, including scalability, high availability, and automation.
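Horizontal scaling through a manifest can be sketched with a standard Deployment; the service name and image below are hypothetical:

```yaml
# Three replicas of one microservice: scaling out, not up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3               # horizontal scaling: more instances of the same container
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example/orders:1.0   # illustrative image reference
```

Changing `replicas` (or letting an autoscaler change it) is all that is needed to add or remove instances.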

Is Kubernetes an Ideal Solution?

Although Kubernetes is a powerful tool that overcomes the limitations of Docker and even Swarm, it has limitations of its own out of the box: no graphical interface, no built-in CI/CD, no user management, and no routable network without extra components. Therefore, it is worth considering alternative solutions, such as Rancher or OpenShift, which may offer a more complete approach to container management. Configuring Kubernetes reminds me somewhat of the early days of Linux, when you had to do a lot of work yourself to use it as a full-fledged operating system. The approach to IT specialization has changed, however: we now expect ready-made products rather than striving for broad knowledge and the ability to configure everything ourselves. Modern all-in-one solutions, like contemporary Linux distributions, offer safe, compatible, and configurable systems. Of course, many people love tinkering with files, configuring, and overseeing every element; for most, though, what matters is that the solution simply works. Kubernetes, while functional, requires significant effort to configure optimally, so ready-made tools that simplify management and speed up deployment are worth considering.

Rancher vs. OpenShift

Rancher, first released in 2016 and acquired by SUSE in 2020, is an open-source platform for managing multiple Kubernetes clusters. It offers a consistent interface for managing clusters, and its installation is relatively simple, quick, and repeatable, although it requires attention. OpenShift, developed by Red Hat, is a commercial platform that adds additional layers of management, security, and automation on top of Kubernetes. It offers full support and integration with other Red Hat products, but its installation is more complex and requires dedicated systems and appropriate resources. Both platforms offer simple, consistent user interfaces and the classic RBAC (Role-Based Access Control) system for managing access. Rancher additionally supports projects, groupings of namespaces specific to that platform.
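The RBAC model both platforms build on is plain Kubernetes RBAC. A minimal sketch, with a hypothetical namespace and user:

```yaml
# A Role granting read-only access to Pods in one namespace,
# bound to a user ("jane" is a hypothetical example).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]           # "" = core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Rancher's projects and OpenShift's own roles are, in the end, conveniences layered over objects like these.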

CLI or GUI?

Microservices-focused containerization brings many tools for managing Kubernetes applications and clusters. From the Kubernetes CLI (Command Line Interface) we have full control over installations, logs, events, and application objects; this approach is well known and widely used in practice. The advantage of the discussed platforms, however, lies in their GUI (Graphical User Interface), which provides excellent visibility into deployments, their dependencies, and the objects that make them up. Both systems work similarly, differing in the placement of certain elements and, of course, in the interface itself, but access to functions is intuitive and straightforward in both cases. Although a GUI is often criticized by administrators and not commonly used, it has its advantages, especially in managing large clusters, where browsing through multiple namespaces, deployments, and PVs (Persistent Volumes) can be time-consuming and complicated. In such situations, the GUI simplifies management and increases efficiency.

Managing Deployments

Rancher and OpenShift provide advanced tools that simplify deployment management. Both systems allow controlling basic Kubernetes objects, such as deployments, services, secrets, external storage, or CRDs (Custom Resource Definitions). This can be done through a parameter-editing form or by editing the object definitions, i.e., YAML manifests, directly. Both tools also offer wizards that make it easy to quickly create, edit, and import complete files. For larger deployments, wizards are rarely used; Helm charts and ready-made YAML files, often driven by CI/CD processes, are more common. The usefulness of the GUI here is evident mainly during testing or when changing application parameters, where it is easier to preview a configuration than to analyze files.
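The objects both GUIs present as editable YAML are ordinary manifests. For example, a Service and a Secret for the same hypothetical "orders" application might look like this:

```yaml
# A Service routing cluster traffic to the app's Pods,
# and a Secret holding a placeholder credential.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # port the container listens on
---
apiVersion: v1
kind: Secret
metadata:
  name: orders-db
type: Opaque
stringData:
  DB_PASSWORD: change-me   # placeholder value, never commit real secrets
```

Whether you paste this into a GUI form or apply it from a CI/CD pipeline, the resulting cluster state is the same.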

Monitoring

Monitoring is crucial in managing applications. Both Rancher and OpenShift offer monitoring tools, although they are not as advanced as external solutions like Prometheus or Grafana, which can, of course, be installed separately. Both systems enable monitoring of applications as well as cluster and network resource utilization, which is sufficient for everyday use.
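Assuming a Prometheus Operator-based stack is present (as in the monitoring add-ons both platforms can install), application metrics are typically wired up with a ServiceMonitor; the names below are illustrative:

```yaml
# A ServiceMonitor telling Prometheus to scrape the "orders" app.
# Assumes its Service has a named port "metrics".
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: orders
spec:
  selector:
    matchLabels:
      app: orders        # matches the Service's labels
  endpoints:
    - port: metrics      # named port on the Service
      interval: 30s      # scrape every 30 seconds
```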

Network Support

Kubernetes does not natively provide routable pod networking; it requires an additional CNI (Container Network Interface) plugin, such as Flannel or Calico. Rancher and OpenShift handle networking in different ways:

  • Rancher: During cluster configuration, the user can choose a network plugin from predefined options (Cilium, Weave, Flannel, Calico, Canal) or use their own. Configuration usually takes place automatically, with the option to customize settings, such as IP range, network policies, or MTU size.
  • OpenShift: Uses its own networking solution, OVN-Kubernetes, by default, but also allows the use of external CNI plugins.

Both solutions offer well-prepared networking tools that are sufficient unless advanced configuration is needed.
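Part of what these CNI plugins enforce is the core Kubernetes NetworkPolicy object. A minimal sketch, with hypothetical app labels:

```yaml
# Only Pods labeled app=frontend may reach the "orders" Pods on TCP 8080.
# Enforcement requires a CNI that supports NetworkPolicy (e.g., Calico, Cilium).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: orders          # policy applies to these Pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```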

GitOps – Continuous Integration and Continuous Delivery

GitOps is a modern method of managing applications and infrastructure based on declarative code, with Git repositories as the source of truth. It combines DevOps practices with infrastructure-as-code and applications deployed through Helm charts. Rancher uses Fleet, a system built on Helm charts. Fleet is developed by SUSE and used primarily in Rancher, where it allows efficient management of multi-cluster environments; Helm charts serve as the engine for deploying resources in the clusters Rancher manages. OpenShift uses the external Argo CD application as its declarative GitOps engine. Argo CD is less tied to a single vendor and more universal: it enables consistent configuration and deployment of infrastructure and applications in Kubernetes clusters, and it is an open-source solution with a large community and knowledge base.
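The declarative style boils down to objects like an Argo CD Application, which points the cluster at a Git repository (the repository URL and paths below are illustrative):

```yaml
# An Argo CD Application: desired state lives in Git, Argo CD keeps
# the cluster in sync with it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/repos/orders.git  # hypothetical repo
    targetRevision: main
    path: deploy                                   # manifests or a Helm chart
  destination:
    server: https://kubernetes.default.svc         # the local cluster
    namespace: orders
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift in the cluster
```

Fleet expresses the same idea with its own GitRepo resources, but the principle is identical: commit to Git, and the platform reconciles the cluster.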

Rancher vs. OpenShift — Which to Choose?

This article aimed to present both solutions in the context of Kubernetes rather than to compare them directly. Both systems have their unique features. Rancher offers a cluster manager that can handle both its own cluster distributions (k3s, RKE2, RKE) and external clusters, which gives it great flexibility and openness; it can be installed on various Linux distributions and is an open-source solution with a functionally similar, supported, paid version called Rancher Prime. OpenShift is a comprehensive container platform based on Kubernetes, offering additional tools and enhanced security features, and it is more stringent about hardware and licensing requirements. OpenShift is a commercial product with a community version, OKD. While the two are functionally similar, there are notable differences, such as the underlying CoreOS variant: OKD runs on community-supported Fedora CoreOS, while the paid version runs on commercial Red Hat Enterprise Linux CoreOS. The choice depends on available resources, preferences for openness, and budget. Both systems elevate Kubernetes to a higher level, providing comprehensive environments without the tedious work of integrating tools.