Kubernetes in public clouds and Maestro

Containers have changed our attitude towards software development, deployment, and maintenance.

Various application-making services are packed in separate containers and deployed in a cluster of physical or virtual machines. This solution takes less resources and provides greater flexibility. What it needs is the orchestration, a tool that will automate the procedures for deploying, managing, scaling, and networking containers.

Kubernetes is one of the most popular container orchestration solutions. It was created to make application deployment easy, manageable, and automated. Originally created by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes is an open-source container orchestration engine for deploying and managing multi-container applications at scale.

Kubernetes simplifies the orchestration of complex container architectures by providing generally applicable features such as automated rollouts and rollbacks, service discovery, load balancing, and self-healing.


Why choose Kubernetes

With Kubernetes, you can run isolated applications, run different versions of the same application in different containers, easily get containers with the desired configuration, and scale quickly when you need to.
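As an illustration, running a specific version of an application with a chosen number of replicas takes only a short Kubernetes manifest; the names and image below are hypothetical:

```yaml
# Hypothetical Deployment: runs version 2.0 of an app with three replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 3                 # scale up or down by changing this number
  selector:
    matchLabels:
      app: myapp
      version: "2.0"
  template:
    metadata:
      labels:
        app: myapp
        version: "2.0"
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:2.0   # pin the desired version
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

A second Deployment with a different image tag can run version 1.0 side by side in the same cluster, which is how multiple versions of one application coexist.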

There are two ways to run Kubernetes: deploy a Kubernetes cluster on instances you run yourself, or use one of the managed Kubernetes services offered by all leading cloud providers.

In this article, we give a brief overview of the Kubernetes-related services provided by AWS, Azure, and Google and describe the unified way of working with Kubernetes implemented by the Maestro team.

Kubernetes solutions in public clouds

Each cloud provider offers its own set of services and tools intended for securely deploying and working with Kubernetes.

AWS services for Kubernetes

AWS offers several container-management tools and services that provide a secure place to store, manage, and control images and running containers, along with flexible compute engines to power them.

Amazon Elastic Kubernetes Service and AWS Fargate are the two main AWS solutions for working with containers. In addition, you can use EC2 to run containers within your virtual infrastructure with full control over their configuration and scaling.

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service distinguished by its security, reliability, and scalability. EKS can be used with AWS Fargate to save money, since you pay only for application resources. Deep integration with services such as Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC) allows monitoring, scaling, and load-balancing your applications, while integration with AWS App Mesh gives access to service mesh features and provides observability, traffic control, and security.

AWS Fargate is a serverless compute engine for containers that works with Amazon Elastic Kubernetes Service. Fargate simplifies the development of your applications and improves their security, as the applications are isolated by design. Besides, Fargate creates isolated compute environments for tasks and pods by running each task or pod in its own kernel. Fargate is a cost-effective solution because it bills only the resources required to run your containers and does not charge for over-provisioning or additional servers.
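As a sketch of how EKS and Fargate fit together, eksctl (a popular CLI for EKS) accepts a declarative cluster definition like the following; the cluster name, region, and namespace are placeholders:

```yaml
# Illustrative eksctl cluster definition: an EKS cluster whose "default"
# namespace pods are scheduled onto Fargate instead of EC2 worker nodes.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # placeholder name
  region: eu-west-1         # placeholder region
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default  # pods in this namespace run on Fargate
```

Created with `eksctl create cluster -f cluster.yaml`, such a cluster bills only the vCPU and memory that the Fargate pods actually request.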

Other container-management tools provided by AWS, including Amazon Elastic Container Registry (ECR) and Amazon Elastic Container Service (Amazon ECS), are listed on the official AWS website.

Why choose AWS for Kubernetes

AWS provides straightforward and transparent container-related services for users who prefer AWS technologies.

Azure services for Kubernetes

Microsoft Azure provides an extended set of services for managing containers.

Azure Kubernetes Service (AKS) is a serverless Kubernetes service that combines an integrated CI/CD experience with enterprise-grade security and governance. AKS is positioned as a single platform that unites development and operations teams for rapidly building, delivering, and scaling applications. Its main features include elastic capacity provisioning, advanced identity and access management, dynamic rules enforced across multiple clusters, and integration with Visual Studio Code Kubernetes tools, Azure DevOps, and Azure Monitor.

Other container-related services include:

  • App Service allows quickly building, deploying, and scaling web and mobile applications. It supports .NET, .NET Core, Node.js, Java, Python, and PHP, in containers or running on Windows or Linux.
  • Container Instances supports running containers in Azure Cloud without the need to manage servers.
  • Batch performs cloud-scale jobs, including scaling virtual machines, staging data, executing compute pipelines, and autoscaling work in the queue.
  • Service Fabric allows developing microservices and orchestrating containers on Windows or Linux.
  • Container Registry is used for storing and managing container images across all types of Azure deployments.
  • Azure Red Hat OpenShift allows running fully managed OpenShift clusters, jointly operated with Red Hat.

More details about all these container-related services are available on the official Azure website.

Why choose Azure for Kubernetes

Azure containers are a preferable solution if you are used to working with Windows-based DevOps tooling and Azure Cloud.

Google services for Kubernetes

As Kubernetes was originally created by Google, Google Cloud has one of the most developed sets of Kubernetes solutions.

Google Kubernetes Engine (GKE) is a secure Kubernetes service with four-way autoscaling and multi-cluster support. With GKE, you can start quickly with single-click clusters and then enjoy the high-availability control planes of multi-zonal and regional clusters. GKE eliminates operational overhead with auto-repair, auto-upgrade, and release channels. A high level of security is achieved through data encryption and vulnerability scanning of container images. GKE is integrated with Cloud Monitoring and provides infrastructure, application, and Kubernetes-specific views.
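Horizontal pod autoscaling, one of the autoscaling dimensions GKE supports, is configured with a standard Kubernetes object; the target Deployment name and the thresholds below are illustrative:

```yaml
# Illustrative HorizontalPodAutoscaler: keeps average CPU utilization
# around 70% by scaling the target Deployment between 2 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp            # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

GKE combines this pod-level scaling with vertical pod autoscaling and cluster-level node scaling, which is what the "four-way" autoscaling refers to.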

Knative is a Kubernetes-based platform that was created by Google in cooperation with over 50 different companies. With Knative, you can build, deploy, and manage modern serverless workloads. The service is known for features like scale-to-zero, autoscaling, in-cluster builds, and an eventing framework for cloud-native applications on Kubernetes. Knative combines and codifies best practices shared by leading Kubernetes-based frameworks, so that developers can concentrate on writing code instead of wrestling with the tricky parts of building, deploying, and managing their applications.

Knative supports popular development patterns such as GitOps, DockerOps, and ManualOps, as well as tools and frameworks such as Django, Ruby on Rails, Spring, etc. The service is designed to plug easily into existing build and CI/CD toolchains. Knative has an open API and runtime environment that allows running your workloads where you choose: on Google Cloud with full management, on Anthos on GKE, or on your own Kubernetes cluster.
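A minimal Knative Service shows the scale-to-zero behavior mentioned above; the image is Knative's public hello-world sample, while the annotation values are illustrative:

```yaml
# Illustrative Knative Service: scales to zero when idle and up to
# ten instances under load.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # scale to zero when idle
        autoscaling.knative.dev/max-scale: "10"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```

Applying this single object gives a routed, autoscaled HTTP service without any separate Deployment, Service, or Ingress definitions.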

Enterprise customers of Google Cloud enjoy Kubernetes-based enterprise-ready containerized solutions which are portable, include pre-built deployment templates, and support simplified licensing and consolidated billing.

Besides GKE and Knative, Google Cloud provides other container-related services, including Artifact Registry, Cloud Build, Cloud Run, Container Registry, Container Security, Deep Learning Containers, etc. You can find more information on the official Google Cloud website.

Why choose Google Cloud for Kubernetes

Google created Kubernetes, so Google Cloud makes sure that you get the latest versions and technologies.

Managing Kubernetes in cross-cloud environments

Kubernetes solutions implemented by cloud providers are usually optimized for that provider's cloud resources and integrated with its other native tools and services. However, many cloud users prefer cross-cloud environments and locate their resources in different clouds.

That is why there is a growing need for Kubernetes-related software offerings that allow working with multi-cloud infrastructures.

Anthos

Anthos is a platform created by Google for managing containers in a cross-cloud manner. Anthos allows building cloud-native applications anywhere, promoting agility and cost savings.

With Anthos, you can run Kubernetes clusters in both cloud and on-premises environments with built-in visibility: Anthos Config Management analyzes changes and rolls them out to all Kubernetes clusters in order to achieve the desired state.
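As a sketch, Anthos Config Management is typically pointed at a Git repository that holds the desired cluster state; the repository URL below is a placeholder:

```yaml
# Illustrative Anthos Config Management resource: syncs cluster
# configuration from a Git repository (placeholder URL).
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  git:
    syncRepo: https://github.com/example/cluster-config
    syncBranch: main
    secretType: none        # public repo; use a token or SSH key otherwise
```

Every registered cluster then converges on whatever configuration is committed to that repository, which is the GitOps model Anthos builds on.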

Anthos includes a fully managed service mesh. One of its main features is a defense-in-depth security strategy, with a comprehensive set of security controls built into each stage of the application life cycle, from development to build to run.

You can find details about Anthos on the Google Cloud official website.

Kubernetes as a Service from Maestro

Maestro adheres to the notion of Open PaaS, whose goal is to equip users with the latest versions of the containerization tools provided and supported by the community. Kubernetes as a Service is one of the Maestro services that equips cloud users with a quick, easy, and well-tested Open PaaS solution for private regions and public clouds.

Kubernetes as a Service is deployed by means of Kubespray, a collection of Ansible playbooks, inventories, provisioning tools, and domain knowledge intended to help manage and configure Kubernetes clusters. For an ordinary cloud user, this means that the Maestro team prepares the necessary environment and generates the Ansible inventory that is used to activate the service via Kubespray, giving an easy start to tenants and tenant members who want to work with Kubernetes.
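The Ansible inventory that Kubespray consumes is a plain hosts file; a minimal sketch with placeholder host names and addresses might look like this:

```ini
# Illustrative Kubespray inventory (placeholder hosts and IP addresses).
[kube_control_plane]
master-1 ansible_host=10.0.0.11
master-2 ansible_host=10.0.0.12

[etcd]
; etcd generally prefers an odd number of members for quorum;
; two are shown here only to mirror a two-master layout.
master-1
master-2

[kube_node]
worker-1 ansible_host=10.0.0.21

[k8s_cluster:children]
kube_control_plane
kube_node
```

Kubespray then deploys the whole cluster in a single playbook run, e.g. `ansible-playbook -i inventory.ini cluster.yml`.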

Kubernetes as a Service is used for managing, scheduling, and running containerized applications created by different container engines and located in private regions or public clouds. Maestro provides the installation of the latest Kubernetes version supported by the community.

In Maestro, the default Kubernetes cluster includes two instances that function as master nodes and one worker node. Master nodes manage the workload, provide communication within the cluster, and hold information about the cluster's state. Two or more master nodes enable high service performance and ensure fault tolerance. The worker node is subordinate to the master nodes and runs workloads. Application containers can be run on both master and worker nodes.
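Running application containers on master nodes, as described above, is possible when a pod tolerates the control-plane taint (or when that taint is removed during cluster setup); a sketch of such a toleration in a pod spec:

```yaml
# Illustrative pod-spec fragment: tolerates the control-plane taint so the
# scheduler may place the pod on master nodes as well as workers.
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```

Without a toleration like this, the Kubernetes scheduler keeps application pods off tainted master nodes by default.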

Kubernetes as a Service has a web UI that becomes automatically available as soon as the service is activated in the cluster. It is accessible using kubectl proxy via a URL over an HTTPS connection.

You can read more about Kubernetes as a Service here.

To sum up

Kubernetes is a popular container orchestration solution that allows easy deployment and effective management of your applications. All leading cloud providers offer tools and services for working with Kubernetes that are optimized for their cloud resources and integrated with their security, monitoring, and management products.

Maestro is one of the tools that can be used for running and managing Kubernetes containers in cross-cloud infrastructures. Infrastructures created by means of Kubernetes as a Service are not confined to Maestro; they can also be used in your hybrid infrastructures and managed by Anthos or other container-management tools.
