How to get the active connection count of a Kubernetes Pod

I am new to Kubernetes, and for our application I need to find the number of active connections of a pod. From what I have analyzed, such data can be obtained using Kubernetes add-ons like Istio, but I learned that these add-ons can incur a memory hit because they inject a kind of sidecar container into each pod to track this data.
Our application will eventually be deployed on IBM Cloud. I am not yet aware of any efficient add-ons specific to that cloud provider; I am still checking on it.
I could not get much from the Kubernetes API. I would like to know if there is any other way to get the active connection count of a pod.
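For reference, a minimal sketch of counting established TCP connections from inside a pod, assuming the container image ships the ss utility (the pod and container names below are placeholders):

kubectl exec my-app-pod -c my-app-container -- ss -tn state established | tail -n +2 | wc -l

This only counts connections visible inside that one container's network namespace; a service mesh such as Istio would give richer per-service connection metrics, at the cost of the sidecar overhead mentioned above.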

Related

Kubernetes Cluster - How to automatically generate documentation/Architecture of services

We started using Kubernetes a while ago, and we have now deployed a fair number of services. It's becoming more and more difficult to know exactly what is deployed. I suppose many people are facing the same problem, so is there already a solution to handle it?
I'm talking about a solution that, when connected to Kubernetes (via kubectl, for example), can generate a kind of map of the cluster.
In order to display one or many resources you need to use the kubectl get command.
To show details of a specific resource or group of resources you can use the kubectl describe command.
Please check the links I provided for more details and examples.
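For example (the namespace and resource names here are just placeholders):

kubectl get all --all-namespaces
kubectl get deployments -n my-namespace
kubectl describe deployment my-app -n my-namespace

The first command lists the common workload resources (pods, services, deployments, replica sets) across every namespace, which already gives a rough map of what is running.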
You may also want to use the Web UI (Dashboard):
Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod or deploy new applications using a deploy wizard.
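If the Dashboard is installed, a common way to reach it locally is through the API server proxy; the namespace and service name below assume the default kubernetes-dashboard deployment and may differ in your cluster:

kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/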
Let me know if that helped.

Live monitoring of container, nodes and cluster

We are using a k8s cluster for one of our applications. The cluster is owned by another team and we don't have full control over it. We are trying to find metrics around resource utilization (CPU and memory) and details about the running containers/pods/nodes, and we need to find out how many containers are running in parallel. The problem is that they have exposed cluster monitoring via Prometheus, but with Prometheus we are not getting live data, and it does not have information about running containers.
My question is: which API is available by default in a k8s cluster that can give us all of this? We don't want to read data from another client like Prometheus or anything else; we want to read metrics directly from the cluster so that the data is not stale. Any suggestions?
As you mentioned, you will need metrics-server (or Heapster) to get that information.
You can confirm whether your metrics server is running with kubectl top nodes or kubectl top pods, or just by checking whether there is a heapster or metrics-server pod present in the kube-system namespace.
Those same commands will also show you the information you are looking for. I won't go into details, as there are a lot of clues and ways of looking at cluster resource usage. You should probably take a look at cAdvisor too, which should already be present in the cluster; it exposes a web UI that exports live information about all the containers on the machine.
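A quick sketch of those checks (pod names and namespaces will differ per cluster):

kubectl get pods -n kube-system | grep -E 'metrics-server|heapster'
kubectl top nodes
kubectl top pods --all-namespaces

The top commands read from the Metrics API exposed by the API server, so the numbers come straight from the cluster rather than from an external store like Prometheus.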
Other than that, there are probably commercial ways of achieving what you are looking for, for example SignalFx and other similar projects, but this will probably require the cluster administrator's involvement.

Application monitoring in Azure Kubernetes cluster using new relic

Requirement - New Relic monitoring for an application running in pods as part of a Kubernetes cluster.
I have installed kube-state-metrics on my cluster and am able to see the Kubernetes dashboard using New Relic Insights.
I also need to configure application monitoring for the same cluster, and I am following https://blog.newrelic.com/2017/11/27/monitoring-application-performance-in-kubernetes/ for that.
I have some questions:
Can this be achieved using kube-state-metrics?
Do I need a separate YAML file for each pod containing the license key?
Do I need to make changes to my application as well, or will adding the information in the spec work?
Do I need to install the Java agent in every pod? If yes, will it eat resources?
Somehow, installation of application monitoring is becoming complex. Please explain the exact installation requirements.
You didn't mention your stack; you should follow the instructions on their site for your language. Typically you just pull in their agent library and configure credentials to get started. You should not have a reason to tell your pods apart, so the agent credentials can be the same for all pods.
Installing agents at the infrastructure level will give you infrastructure data, so you'll get alerts if you're running out of memory/space/CPU and the like. The infrastructure agent cannot possibly know about application data. If you want application performance data (APM), you need to install the agent at the application level too, and you'll get data such as HTTP request rates, error rates and response times if it's a web server. You can also annotate the current transaction with data, which is all application specific. They have a bunch of client agents; see if there's one for your stack. For example, all you need for a Node.js service is require('newrelic') at the top of your app, plus configuration.
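As a rough sketch of how the same credentials can reach every pod without a separate YAML file per pod, the license key can be injected as an environment variable from a shared Secret in the Deployment's pod spec (the names, the Secret, and the NEW_RELIC_LICENSE_KEY variable below are assumptions based on a typical New Relic agent setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest
        env:
        - name: NEW_RELIC_LICENSE_KEY     # read by the in-process APM agent at startup
          valueFrom:
            secretKeyRef:
              name: newrelic-license      # one shared Secret reused by all pods
              key: license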

Kubernetes - Load balancing Web App access per connections

It's been a long time since I last came here, and I hope you're doing well :)
For now, I have the pleasure of working with Kubernetes! So let's start! :)
[THE EXISTING]
I have an operational Kubernetes cluster that I work with every day. It runs several applications, one of which is of particular interest to us: the web management interface.
My cluster currently has one master and four nodes.
For my web application, each pod contains 3 containers: web / mongo / filebeat, and for technical reasons we decided to assign a maximum of 5 users to each web pod.
[WHAT I WANT]
I want to deploy a web pod on each node (web0, web1, web2, web3), which I can already do, and have each session (1 session = 1 user) distributed as follows:
For now, all HTTP requests are processed by web0.
[QUESTIONS]
Am I forced to go through an external load balancer (HAProxy)?
Can I use an internal load balancer by configuring a Service?
Does anyone have experience with the implementation described above?
I thank in advance those who can help me with this process :)
This generally depends on how and where you've deployed your Kubernetes infrastructure, but you can do this natively with a few options.
Firstly, you'll need to scale your web deployment. This is very simple to do:
kubectl scale --current-replicas=2 --replicas=3 deployment/web
If you've deployed into a cloud provider (such as AWS using kops, or GKE) you can use a Service; just specify the type as LoadBalancer. The Service will spread your users' sessions across the pods.
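A minimal sketch of such a Service, assuming the web pods carry the label app: web and listen on port 8080 (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer    # the cloud provider provisions an external load balancer
  selector:
    app: web            # traffic is spread across all pods carrying this label
  ports:
  - port: 80            # port exposed by the load balancer
    targetPort: 8080    # port the web container listens on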
Another option is to use an Ingress. In order to do this, you'll need an Ingress controller, such as the nginx-ingress-controller, which is the most feature-rich and widely deployed.
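A rough sketch of an Ingress routing traffic to that Service, assuming an nginx ingress controller is installed (the hostname and class name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx        # must match the installed ingress controller
  rules:
  - host: web.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web            # the Service in front of the web pods
            port:
              number: 80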
Both of these options will automatically load balance your incoming application sessions, but they may not necessarily do it in the order you've described in your image; it will be effectively random across the available web pods.

What happens when the Kubernetes master fails?

I've been trying to figure out what happens when the Kubernetes master fails in a cluster that only has one master. Do web requests still get routed to pods if this happens, or does the entire system just shut down?
According to the OpenShift 3 documentation, which is built on top of Kubernetes (https://docs.openshift.com/enterprise/3.2/architecture/infrastructure_components/kubernetes_infrastructure.html), if a master fails, nodes continue to function properly, but the system loses its ability to manage pods. Is this the same for vanilla Kubernetes?
In typical setups, the master nodes run both the API and etcd and are either largely or fully responsible for managing the underlying cloud infrastructure. When they are offline or degraded, the API will be offline or degraded.
In the event that they, etcd, or the API are fully offline, the cluster ceases to be a cluster and is instead a bunch of ad-hoc nodes for this period. The cluster will not be able to respond to node failures, create new resources, move pods to new nodes, and so on, until both of the following hold:
Enough etcd instances are back online to form a quorum and make progress (for a visual explanation of how this works and what these terms mean, see this page).
At least one API server can service requests
In a partially degraded state, the API server may be able to respond to requests that only read data.
However, in any case, life for applications will continue as normal unless nodes are rebooted or there is a dramatic failure of some sort during this time, because TCP/UDP services, load balancers, DNS, the dashboard, etc. should all continue to function for at least some time. Eventually, these things will all fail on different timescales. In single-master setups or complete API failure, DNS failure will probably happen first as caches expire (on the order of minutes, though the exact timing is configurable; see the coredns cache plugin documentation). This is a good reason to consider a multi-master setup: DNS and service routing can continue to function indefinitely in a degraded state, even if etcd can no longer make progress.
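For reference, in clusters that run CoreDNS, the cache lifetime mentioned above is set by the cache plugin in the Corefile, which is usually stored in a ConfigMap (the namespace and ConfigMap name below are the kubeadm defaults and may differ in your cluster):

kubectl -n kube-system get configmap coredns -o yaml
# look for a line such as "cache 30" in the Corefile; the number is the maximum TTL in seconds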
There are actions you could take as an operator which would accelerate failures, especially in a fully degraded state. For instance, rebooting a node would cause DNS queries, and in fact probably all pod and service networking functionality, to fail until at least one master comes back online. Restarting DNS pods or kube-proxy would also be bad.
If you'd like to test this out yourself, I recommend kubeadm-dind-cluster, kind, or, for more exotic setups, kubeadm on VMs or bare metal. Note: kubectl proxy will not work during API failure, as that routes traffic through the master(s).
A Kubernetes cluster without a master is like a company running without a manager.
No one can instruct the workers (the k8s components) other than the Manager (the master node); even you, the owner of the cluster, can only instruct the Manager.
Everything keeps working as usual until the work is finished or something stops it (because the master node died after assigning the work).
As there is no Manager to re-assign any work to them, the workers will wait and wait until the Manager comes back.
The best practice is to assign multiple Managers (masters) to your cluster.
Although your data plane and running applications do not immediately start breaking, there are several scenarios where cluster admins will wish they had a multi-master setup. The key to understanding the impact is understanding which components talk to the master, for what, how, and, more importantly, when they will fail if the master fails.
Your application pods running on the data plane will not be immediately impacted, but imagine a very plausible scenario: your traffic suddenly surges and your Horizontal Pod Autoscaler kicks in. Autoscaling will not work, because the Metrics Server collects resource metrics from the kubelets and exposes them in the Kubernetes API server through the Metrics API for use by the Horizontal and Vertical Pod Autoscalers, and your API server is already dead. If your pods' memory shoots up under the high load, they will eventually be killed by the k8s OOM killer. And since the controller manager and scheduler talk to the API server to watch the current state of pods, they too will fail, so no new pod will be scheduled if any pod dies, and your application may stop responding.
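A quick way to see that dependency in a healthy cluster is to query the Metrics API through the API server, which is exactly the path that disappears when the API server goes down (this assumes metrics-server is installed):

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get hpa
# when the metrics pipeline is unavailable, the HPA's TARGETS column shows <unknown>;
# when the API server itself is down, even these commands fail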
One thing to highlight is that Kubernetes system components communicate only with the API server. They don't talk to each other directly, so if the API server is gone, their functionality could fail too, I suppose. An unavailable control plane can mean several things: failure of any or all of these components (API server, etcd, kube-scheduler, controller manager), or, worst of all, the entire master node has crashed.
If the API server is unavailable, no one can use kubectl, since generally all commands talk to the API server: you cannot connect to the cluster, you cannot log in to any pod to check anything on the container file system, and you will not be able to see application logs unless you have an additional centralized log management system.
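Examples of everyday commands that all go through the API server and therefore stop working when it is down (pod names are placeholders):

kubectl get pods
kubectl logs my-pod
kubectl exec -it my-pod -- sh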
If the etcd database fails or gets corrupted, your entire cluster state is gone, and the admins will want to restore it from backups as early as possible.
In short: a failed single-master control plane may not immediately impact your traffic-serving capability, but it cannot be relied on to keep serving your traffic.