KUBERNETES EVENT SIMULATOR
Design Document

Aneesh
Citrix

Table of Contents

INTRODUCTION
  KUBERNETES ARCHITECTURE
  MASTER SERVER COMPONENTS
    Etcd
    kube-apiserver
    kube-controller-manager
    kube-scheduler
    Citrix Ingress Controller
KUBERNETES EVENT SIMULATOR
GOALS AND NON-GOALS
  GOALS
  NON-GOALS
MILESTONES
  Milestone 1: Implement a minimal working kube-apiserver with support for identification and simulation of basic lifecycle events of pods and services
  Milestone 2: Add support for lifecycle events for Ingress
  Milestone 3: Add support for custom resources using CRDs
  Milestone 4: Fully functional kube-apiserver that talks to our ingress controller
REFERENCES

Introduction

Kubernetes, at its basic level, is a system for running and coordinating containerized
applications across a cluster of machines. It is a platform designed to completely manage
the life cycle of containerized applications and services using methods that provide
predictability, scalability, and high availability.

Kubernetes Architecture

To understand how Kubernetes is able to provide these capabilities, it is helpful to get a sense of how it is designed and organized at a high level. Kubernetes can be visualized as a system built in layers, with each higher layer abstracting the complexity found in the lower levels.

At its base, Kubernetes brings together individual physical or virtual machines into a cluster
using a shared network to communicate between each server. This cluster is the physical
platform where all Kubernetes components, capabilities, and workloads are configured.
The machines in the cluster are each given a role within the Kubernetes ecosystem. One
server (or a small group in highly available deployments) functions as the master server.
This server acts as a gateway and brain for the cluster by exposing an API for users and
clients, health checking other servers, deciding how best to split up and assign work (known
as "scheduling"), and orchestrating communication between other components. The master
server acts as the primary point of contact with the cluster and is responsible for most of
the centralized logic Kubernetes provides.

The other machines in the cluster are designated as nodes: servers responsible for accepting
and running workloads using local and external resources. To help with isolation,
management, and flexibility, Kubernetes runs applications and services in containers, so
each node needs to be equipped with a container runtime (like Docker or rkt). The node
receives work instructions from the master server and creates or destroys containers
accordingly, adjusting networking rules to route and forward traffic appropriately.

As mentioned above, the applications and services themselves are run on the cluster within
containers. The underlying components make sure that the desired state of the applications
matches the actual state of the cluster. Users interact with the cluster by communicating
with the main API server either directly or with clients and libraries. To start up an
application or service, a declarative plan is submitted in JSON or YAML defining what to
create and how it should be managed. The master server then takes the plan and figures out
how to run it on the infrastructure by examining the requirements and the current state of
the system. This group of user-defined applications running according to a specified plan
represents Kubernetes' final layer.

Master Server Components

As we described above, the master server acts as the primary control plane for Kubernetes
clusters. It serves as the main contact point for administrators and users, and also provides
many cluster-wide systems for the relatively unsophisticated worker nodes. Overall, the
components on the master server work together to accept user requests, determine the
best ways to schedule workload containers, authenticate clients and nodes, adjust cluster-wide networking, and manage scaling and health checking responsibilities.

These components can be installed on a single machine or distributed across multiple servers. We will take a look at each of the individual components associated with master servers in this section.

Etcd

One of the fundamental components that Kubernetes needs to function is a globally available configuration store. The etcd project, developed by the team at CoreOS, is a lightweight, distributed key-value store that can be configured to span multiple nodes.

Kubernetes uses etcd to store configuration data that can be accessed by each of the nodes
in the cluster. This can be used for service discovery and can help components configure or
reconfigure themselves according to up-to-date information. It also helps maintain cluster
state with features like leader election and distributed locking. Because it provides a simple HTTP/JSON API, the interface for setting and retrieving values is very straightforward.

Like most other components in the control plane, etcd can be configured on a single master
server or, in production scenarios, distributed among a number of machines. The only
requirement is that it be network accessible to each of the Kubernetes machines.
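
As a brief illustration, the Go sketch below stores a configuration value in etcd and reads it back using the official v3 client (go.etcd.io/etcd/client/v3). This is only a sketch: the endpoint and key name are assumptions for illustration, and note that the v3 client speaks gRPC rather than the older HTTP/JSON interface mentioned above.

package main

import (
    "context"
    "fmt"
    "time"

    clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
    // Connect to a local etcd member; a production cluster would list
    // several endpoints here for redundancy.
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"127.0.0.1:2379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        panic(err)
    }
    defer cli.Close()

    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()

    // Store a configuration value and read it back.
    if _, err := cli.Put(ctx, "/config/message", "hello"); err != nil {
        panic(err)
    }
    resp, err := cli.Get(ctx, "/config/message")
    if err != nil {
        panic(err)
    }
    for _, kv := range resp.Kvs {
        fmt.Printf("%s = %s\n", kv.Key, kv.Value)
    }
}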

kube-apiserver

One of the most important master services is an API server. This is the main management
point of the entire cluster as it allows a user to configure Kubernetes' workloads and
organizational units. It is also responsible for making sure that the etcd store and the service
details of deployed containers are in agreement. It acts as the bridge between various
components to maintain cluster health and disseminate information and commands.

The API server implements a RESTful interface, which means that many different tools and
libraries can readily communicate with it. A client called kubectl is available as a default
method of interacting with the Kubernetes cluster from a local computer.
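
To make the REST mapping concrete, the hypothetical Go sketch below fetches the pods in the default namespace with a plain HTTP request. It assumes "kubectl proxy" is running locally on its default port 8001, so that the proxy handles authentication and forwards the request to the API server.

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Roughly equivalent to "kubectl get pods --namespace default";
    // kubectl proxy attaches our credentials and relays the request.
    resp, err := http.Get("http://127.0.0.1:8001/api/v1/namespaces/default/pods")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body)) // a JSON-encoded PodList
}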

kube-controller-manager

The controller manager is a general service that has many responsibilities. Primarily, it
manages different controllers that regulate the state of the cluster, manage workload life
cycles, and perform routine tasks. For instance, a replication controller ensures that the
number of replicas (identical copies) defined for a pod matches the number currently
deployed on the cluster. The details of these operations are written to etcd, where the
controller manager watches for changes through the API server.

When a change is seen, the controller reads the new information and implements the
procedure that fulfills the desired state. This can involve scaling an application up or down,
adjusting endpoints, etc.
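
The watch-and-react loop at the heart of every controller can be sketched with client-go as below. This is a simplification for illustration, not the actual kube-controller-manager code; real controllers use informers, caches, and work queues rather than a bare watch.

// Simplified sketch of a controller's watch loop (illustrative only).
package controllerdemo

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

func watchPods(clientset kubernetes.Interface) error {
    // Open a watch on pods in all namespaces through the API server.
    watcher, err := clientset.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        return err
    }
    // Each event reports a change in cluster state; a real controller
    // reacts by driving the actual state toward the desired state.
    for event := range watcher.ResultChan() {
        if pod, ok := event.Object.(*corev1.Pod); ok {
            fmt.Printf("%s pod %s/%s\n", event.Type, pod.Namespace, pod.Name)
        }
    }
    return nil
}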

kube-scheduler
The process that actually assigns workloads to specific nodes in the cluster is the scheduler.
This service reads in a workload's operating requirements, analyzes the current
infrastructure environment, and places the work on an acceptable node or nodes.

The scheduler is responsible for tracking available capacity on each host to make sure that
workloads are not scheduled in excess of the available resources. The scheduler must know
the total capacity as well as the resources already allocated to existing workloads on each
server.
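
The core feasibility test can be expressed in a few lines. The toy Go sketch below checks whether a request still fits in a node's remaining capacity; the types and field names are illustrative, not the scheduler's actual ones, and the real scheduler also weighs affinity, taints, and many other predicates.

package main

import "fmt"

// Toy model of a node's resource accounting (illustrative fields).
type node struct {
    cpuCapacity, cpuAllocated int64 // millicores
    memCapacity, memAllocated int64 // bytes
}

// fits reports whether a workload's resource request still fits on n.
func fits(n node, cpuReq, memReq int64) bool {
    return n.cpuAllocated+cpuReq <= n.cpuCapacity &&
        n.memAllocated+memReq <= n.memCapacity
}

func main() {
    n := node{cpuCapacity: 4000, cpuAllocated: 3500, memCapacity: 8 << 30, memAllocated: 6 << 30}
    fmt.Println(fits(n, 250, 1<<30)) // true: 250m CPU and 1 GiB still fit
}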

Citrix Ingress Controller

The Kubernetes system constantly tries to move its current state toward the desired state. The
worker units that guarantee the desired state are called controllers. A controller is a loop
that drives actual cluster state towards the desired cluster state.

The Ingress Resource is a collection of rules that allows incoming connections to reach our
Services. An Ingress Controller is a controller that watches the Kubernetes API server for
updates to the Ingress resource and reconfigures the Ingress load balancer accordingly.
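
A sketch of that watch loop using client-go's shared informers is shown below. It assumes the networking.k8s.io/v1 Ingress API and merely prints events at the point where a real ingress controller would rewrite its load balancer configuration; it is not the actual Citrix Ingress Controller code.

// Illustrative sketch of an ingress controller's watch loop.
package ingresswatch

import (
    "fmt"
    "time"

    netv1 "k8s.io/api/networking/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

func Run(clientset kubernetes.Interface, stopCh <-chan struct{}) {
    factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
    informer := factory.Networking().V1().Ingresses().Informer()

    informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            ing := obj.(*netv1.Ingress)
            // A real controller would translate ing.Spec.Rules into
            // load balancer configuration here.
            fmt.Printf("ingress added: %s/%s\n", ing.Namespace, ing.Name)
        },
        UpdateFunc: func(oldObj, newObj interface{}) {
            ing := newObj.(*netv1.Ingress)
            fmt.Printf("ingress updated: %s/%s\n", ing.Namespace, ing.Name)
        },
    })

    factory.Start(stopCh)            // start the informer goroutines
    factory.WaitForCacheSync(stopCh) // wait for the initial list to load
    <-stopCh                         // run until asked to stop
}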

Kubernetes Event Simulator

The objective of the project is to implement a custom kube-apiserver which simulates the Kubernetes API and provides the end user with custom APIs for pods, services, secrets, and ingress, along with all the APIs generally available.

At the heart of Kubernetes is an API; in fact, everything in Kubernetes is treated as an API object. The creation or deletion of pods, replica sets, services, and other cluster management actions all translate into REST API calls.

To access a cluster via the API, you must first have a cluster; we can use Minikube to create one. There are many ways to communicate with the cluster: we can use curl, kubectl, or the programmatic approach of using a client library. Client-go is the most popular library used by tools written in Go, and there are clients for many other languages (Java, Python, etc.).

To write applications using the Kubernetes REST API, we do not need to implement the API
calls and request/response types ourselves. We can use a client library for the programming
language we are using.

Client libraries often handle common tasks such as authentication for you. Most client
libraries can discover and use the Kubernetes Service Account to authenticate if the API
client is running inside the Kubernetes cluster, or can understand the kubeconfig file format
to read the credentials and the API Server address.
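
A minimal client-go program that loads a kubeconfig and lists pods might look like the sketch below. It assumes a recent client-go release (where list calls take a context) and the conventional kubeconfig path under the home directory.

package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Read credentials and the API server address from ~/.kube/config.
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }

    // Create a Clientset: the typed client for the core Kubernetes APIs.
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("found %d pods in the default namespace\n", len(pods.Items))
}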

Because we only have to simulate the events, we can make use of the kubernetes/fake package.

The kubernetes package in client-go provides the NewForConfig method used to create a new Clientset object, which is needed to connect to the Kubernetes cluster. The Clientset type implements the Interface defined in the same package, and all methods of this type belong to this interface. So, we can create another implementation of this interface to replace the real one.

This is exactly what the kubernetes/fake package does. It provides a fake Clientset type that also implements this interface, along with a NewSimpleClientset method that creates a fake clientset, which will be very useful in our application.
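
For instance, the sketch below creates a fake clientset, "creates" a pod against it, and lists it back, all without any running cluster; the pod name here is purely illustrative.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes/fake"
)

func main() {
    // NewSimpleClientset returns a clientset backed by an in-memory
    // object tracker instead of a real API server.
    clientset := fake.NewSimpleClientset()

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "demo-pod", Namespace: "default"},
    }
    // "Creating" the pod only records it in the tracker; no container runs.
    if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("simulated pods: %d\n", len(pods.Items))
}

Because the fake Clientset implements the same Interface as the real one, code like the watch loops sketched earlier can run against it unchanged.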

Goals and Non-Goals

Goals
• Implement a custom API server that simulates all the functionality of the real kube-apiserver.
• Ability to extend the kube-apiserver through Custom Resource Definitions.

Non-Goals
• Actually implementing the events. We only need to simulate them, which can be achieved through the kubernetes/fake package, as discussed above.

Milestones

• Milestone 1: Implement a minimal working kube-apiserver with support for identification and simulation of basic lifecycle events of pods and services.
• Milestone 2: Add support for lifecycle events for Ingress.
• Milestone 3: Add support for custom resources using CRDs.
• Milestone 4: Fully functional kube-apiserver that talks to our ingress controller.

References

https://www.martin-helmich.de/en/blog/kubernetes-crd-client.html
https://itnext.io/testing-kubernetes-go-applications-f1f87502b6ef
