A Service Mesh for Kubernetes (Part 2): Pods Are Great Until They're Not
You can use linkerd as a service mesh with Kubernetes. In Part 2 of this series, see how and why you should install linkerd using DaemonSets.
In our recent post about linkerd on Kubernetes, A Service Mesh for Kubernetes, Part I: Top-Line Service Metrics, observant readers noticed that linkerd was installed using DaemonSets rather than as a sidecar process. In this post, we'll explain why (and how!) we do this.
As a service mesh, linkerd is designed to be run alongside application code, managing and monitoring inter-service communication, including performing service discovery, retries, load-balancing, and protocol upgrades.
At first glance, this sounds like a perfect fit for a sidecar deployment in Kubernetes. After all, one of Kubernetes's defining characteristics is its pod model. Deploying as a sidecar is conceptually simple, has clear failure semantics, and we've spent a lot of time optimizing linkerd for this use case.
However, the sidecar model also has a downside: deploying per pod means that resource costs scale per pod. If your services are lightweight and you run many instances, like Monzo (who built an entire bank on top of linkerd and
Kubernetes), then the cost of using sidecars can be quite high.
We can reduce this resource cost by deploying linkerd per host rather than per pod. This allows resource consumption to scale per host, which is typically a significantly slower-growing metric than pod count. And, happily,
Kubernetes provides DaemonSets for this very purpose.
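A DaemonSet schedules exactly one copy of a pod onto each node in the cluster. What follows is a simplified sketch of what a linkerd DaemonSet manifest might look like; the image tag, port numbers, and hostNetwork setting are illustrative assumptions, and the full linkerd.yml in the linkerd examples also mounts a config volume and exposes an admin port:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: l5d
spec:
  selector:
    matchLabels:
      app: l5d
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true        # lets pods reach linkerd via the node's IP
      containers:
      - name: l5d
        image: buoyantio/linkerd:latest   # pin a real version in practice
        ports:
        - name: outgoing       # proxies requests leaving this host
          containerPort: 4140
          hostPort: 4140
        - name: incoming       # proxies requests arriving at this host
          containerPort: 4141
          hostPort: 4141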
Unfortunately, for linkerd, deploying per host is a bit more complicated than just using DaemonSets. Read on for how we solve the service mesh problem with per-host deployments in Kubernetes.
To fully accomplish this, linkerd must be on both the sending side and the receiving side of each request, proxying to and from local instances. For example, for HTTP to HTTPS upgrades, linkerd must be able to both initiate and terminate TLS. In a DaemonSet world, a request path through linkerd looks like the diagram below:
[Diagram: a request travels from Pod A through the linkerd instance on Host 1, then to the linkerd instance on Host 2, and finally to Pod J.]
As you can see, a request that starts in Pod A on Host 1 and is destined for Pod J on Host 2 must go through Pod A's host-local linkerd instance, then to Host 2's linkerd instance, and finally to Pod J. This path introduces three problems that linkerd must address:

1. How does an application identify its host-local linkerd?
2. How does linkerd route an outgoing request to the destination's linkerd?
3. How does linkerd route an incoming request to the destination application?
What follows are the technical details on how we solve these three problems. If you just want to get linkerd working with Kubernetes DaemonSets, see the previous blog post!
In Kubernetes 1.4 and later, this information is directly available through the Downward API. Here is an excerpt from hello-world.yml that shows how the node name can be passed into the application:
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: http_proxy
  value: $(NODE_NAME):4140
args:
- "-addr=:7777"
- "-text=Hello"
- "-target=world"
(Note that this example sets the http_proxy environment variable to direct all HTTP calls through the host-local linkerd instance. While this approach works with most HTTP applications, non-HTTP applications will need to do
something different.)
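To make the proxying concrete, here is a short sketch of how the http_proxy value is assembled; the node name below is a hypothetical stand-in for what the Downward API would inject:

```shell
# NODE_NAME would normally be injected by the Downward API; this value
# is a hypothetical stand-in for illustration.
NODE_NAME=node-1.example.internal

# Route all HTTP traffic through the host-local linkerd's outgoing port (4140).
export http_proxy="${NODE_NAME}:4140"

echo "$http_proxy"   # node-1.example.internal:4140
```

Any HTTP library that honors the http_proxy convention will then send its requests through the linkerd instance on the pod's own node.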
In Kubernetes releases prior to 1.4, this information is still available, but in a less direct way. We provide a simple script that queries the Kubernetes API to get the host IP; the output of this script can be consumed by the
application, or used to build an http_proxy environment variable as in the example above.
Here is an excerpt from hello-world-legacy.yml that shows how the host IP can be passed into the application:
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: NS
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
command:
- "/bin/sh"
- "-c"
- "http_proxy=`hostIP.sh`:4140 helloworld -addr=:7777 -text=Hello -target=world"
Note that the hostIP.sh script requires that the pod's name and namespace be set as environment variables in the pod.
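We haven't reproduced hostIP.sh itself here, but the key step is reading .status.hostIP from the pod object that the Kubernetes API returns for the pod's name and namespace. The sketch below runs that extraction against a truncated, hypothetical API response (and assumes python3 is available for JSON parsing):

```shell
# A truncated, hypothetical pod object, as returned by
# GET /api/v1/namespaces/$NS/pods/$POD_NAME
response='{"status":{"hostIP":"10.0.0.7","podIP":"10.1.2.3"}}'

# Pull out the host IP, as a hostIP.sh-style script might.
host_ip=$(echo "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"]["hostIP"])')

# Combine with linkerd's outgoing port to form the proxy address.
echo "${host_ip}:4140"   # 10.0.0.7:4140
```

In a real pod, the script would fetch the pod object from the API server using the service account token rather than a hardcoded response.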
For outgoing requests, linkerd's configuration uses the io.l5d.k8s.daemonset transformer, which rewrites each request's destination to the incoming port of the l5d DaemonSet pod running on the destination's host:

routers:
- protocol: http
  label: outgoing
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: default
      port: incoming
      service: l5d
...
For incoming requests, the io.l5d.k8s.localnode transformer restricts service discovery to pods running on linkerd's own node, so that each request is delivered to a local application instance:

routers:
- protocol: http
  label: incoming
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.localnode
...
Deploying linkerd as a Kubernetes DaemonSet gives us the best of both worlds: it allows us to accomplish the full set of goals of a service mesh (such as transparent TLS, protocol upgrades, and latency-aware load balancing), while scaling linkerd instances per host rather than per pod.
For a full, working example, see the previous blog post, or download our example app. And for help with this configuration or anything else about linkerd, feel free to drop into our very active Slack or post a topic on the linkerd Discourse.
Published at DZone with permission of Alex Leong, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.