How to set http2-max-streams-per-connection with Kops - kubernetes

Is it possible to set the --http2-max-streams-per-connection value for a cluster created by Kops?
I have an interesting situation where one of my nodes falls into a NotReady status whenever I deploy a helm chart to it and I feel like it might be connected to this setting.
The nodes are generally fine and run without any issues, but once I deploy my helm chart, the status of whichever node it gets deployed to changes to NotReady after a few minutes, which I find odd.
I've done a bit of reading and seen a number of similar issues pointing to the --http2-max-streams-per-connection setting, but I'm not sure how to go about setting it.
Any ideas anyone?

You should be able to set this by adding the following to the cluster spec:
spec:
  kubeAPIServer:
    http2MaxStreamsPerConnection: <value>
See https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#KubeAPIServerConfig and https://kops.sigs.k8s.io/cluster_spec/
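If you're managing the cluster with the kops CLI, a rough sketch of applying that change (cluster name and state store are placeholders) would be:
## edit the cluster manifest and add the kubeAPIServer block shown above
$ kops edit cluster --name <cluster-name> --state s3://<state-store>
## apply the change and roll the control-plane nodes so kube-apiserver picks up the new flag
$ kops update cluster --name <cluster-name> --state s3://<state-store> --yes
$ kops rolling-update cluster --name <cluster-name> --state s3://<state-store> --yes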
That being said, I do not believe your NotReady nodes are due to that setting. You may want to join #kops-users on the Kubernetes Slack and ask for help triaging that problem.

Related

AKS not deleting orphaned resources

After some time, I have problems with some of our clusters where auto-deletion of orphaned resources stops working. So if I remove a Deployment, neither the ReplicaSet nor the Pods are removed; or if I remove a ReplicaSet, a new one is created but the previous Pods are still there.
I can't even update some Deployments, because that would create a new ReplicaSet + Pods.
This is a real problem, as we are creating and removing resources and relying on cascading deletion of children.
The thing is that destroying and recreating the cluster makes it work perfectly again, and we weren't able to trace the problem to anything we did.
I tried upgrading both master and agent nodes to a newer version and restarting kubelet on the agent nodes, but that doesn't solve anything.
Does anyone know where the problem could be, or which component is in charge of the cascading deletion of orphaned resources?
Does this happen to anyone else? It has already happened to us in 3 different clusters with different Kubernetes versions.
I have tested it creating the test deployment in K8s documentation, and then delete it:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
kubectl delete deployments.apps nginx-deployment
But the pods are still there.
Thanks in advance
The problem was caused by a faulty CRD / admission webhook. It may seem strange, but a broken CRD or a faulty pod acting as a webhook will make kube-controller-manager fail for all resources (at least in AKS). After removing the CRDs and the faulty webhook, it started to work again. (Why the webhook was failing is a different matter.)
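If anyone hits the same symptom, a rough sketch of how to look for a culprit like this (resource names will differ) is:
## list admission webhooks and CRDs that could be breaking the controller-manager's garbage collection
$ kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations
$ kubectl get crd
## describe a suspicious webhook to find the Service backing it, then check that its pods are healthy
$ kubectl describe validatingwebhookconfiguration <webhook-name>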

how to take a problematic pod offline to troubleshoot

Hi, I know there's a way I can pull a problematic node out of a load balancer to troubleshoot it. But how can I pull a pod out of a Service to troubleshoot it? What tools or commands can do that?
Change its labels so they no longer match the selector: in the Service; we used to do that all the time. You can even put it back into rotation if you want to test a hypothesis. I don't recall exactly how quickly it takes effect, but I would guess "real quick" is a good approximation. :-)
## for example, remove the label the Service selects on:
$ kubectl label pod $the_pod app.kubernetes.io/name-
## or, change it to a non-matching value:
$ kubectl label pod $the_pod --overwrite app.kubernetes.io/name=i-am-debugging-this-pod
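To confirm the pod has actually dropped out of rotation, you can check the Service's endpoints (the Service name below is just a placeholder):
## the relabeled pod's IP should disappear from the endpoint list
$ kubectl get endpoints my-service -o wide
$ kubectl describe service my-service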
As mentioned in O'Reilly's "Kubernetes recipes: Maintenance and troubleshooting" page here:
Removing a Pod from a Service
Problem
You have a well-defined service backed by several
pods. But one of the pods is misbehaving, and you would like to take
it out of the list of endpoints to examine it at a later time.
Solution
Relabel the pod using the --overwrite option—this will allow you to
change the value of the run label on the pod. By overwriting this
label, you can ensure that it will not be selected by the service
selector and will be removed from the list of
endpoints. At the same time, the replica set watching over your pods
will see that a pod has disappeared and will start a new replica.
To see this in action, start with a straightforward deployment
generated with kubectl run:
For the commands, check the recipes page mentioned above. There is also a section on "Debugging Pods" which will be helpful.
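As a rough sketch of what the recipe's commands look like (this uses the legacy kubectl run syntax, which created a Deployment whose pods carried a run=<name> label; the names here are illustrative):
## older kubectl: creates a Deployment with pods labeled run=nginx
$ kubectl run nginx --image=nginx --replicas=3
## relabel the misbehaving pod so the Service selector (run=nginx) no longer matches;
## the ReplicaSet will notice a pod is "missing" and start a replacement
$ kubectl label pod <misbehaving-pod> run=debug --overwrite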

One Traefik Pod in Kubernetes fails with error: 'command traefik error: field not found, node: redirect'

I'm running Traefik on a Kubernetes cluster to manage Ingress, which has been running ok for a long time.
I recently implemented cluster autoscaling, which works fine except that on one node (newly created by the autoscaler) Traefik won't start. It sits in CrashLoopBackOff, and when I check the Pod's logs I get: [date] [time] command traefik error: field not found, node: redirect.
Google found no relevant results, and the error itself is not very descriptive, so I'm not sure where to look.
My best guess is that it has something to do with the RedirectRegex Middleware configured in Traefik's config file:
[entryPoints.http.redirect]
regex = "^http://(.+)(:80)?/(.*)"
replacement = "https://$1/$3"
Traefik actually still works - I can still access all of my apps from their URLs in my browser, even those on the node with the dead Traefik Pod.
The other Traefik Pods on other Nodes still run happily, and the Nodes are (at least in theory) identical.
After further googling, I found this on Reddit. It turns out Traefik updated a few days ago to v2.0, which is not backwards compatible.
Only this pod had the issue, because it was the only one for which the new (v2.0) image was pulled (being on the only recently created node).
I reverted to v1.7 until I have time to fix it properly: I had to update the DaemonSet to use v1.7, then kill the Pod so it would be recreated from the old image.
The devs have a Migration Guide that looks like it may help.
"redirect" is gone but now there is "RedirectScheme" and "RedirectRegex" as a new concept of "Middlewares".
It looks like they are moving to a pipeline approach, so you can define a chain of "middlewares" to apply to an "entrypoint" to decide how to direct it and what to add/remove/modify on packets in that chain. "backends" are now "providers", and they have a clearer, modular concept of configuration. It looks like it will offer better organization than earlier versions.
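For what it's worth, a sketch of how the old redirect might look as a v2 RedirectRegex middleware in the file provider's dynamic configuration (the middleware name is arbitrary, and this is untested):
[http.middlewares]
  [http.middlewares.redirect-to-https.redirectRegex]
    regex = "^http://(.+)(:80)?/(.*)"
    replacement = "https://${1}/${3}"
The middleware then has to be attached to a router, which is a separate piece of configuration.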

Heapster status stuck in Container Creating or Pending status

I am new to Kubernetes and started working with it about a month ago.
When setting up the cluster, I sometimes see Heapster get stuck in ContainerCreating or Pending status. When this happens, the only way I have found to fix it is to re-install everything from scratch, which has solved the problem; afterwards Heapster runs without any issue. But I don't think this is the optimal solution every time, so please help me figure out how to solve the issue when it occurs again.
The Heapster image is pulled from GitHub for our use. Right now the cluster is running fine, so I could not take a screenshot of Heapster failing with its status stuck in ContainerCreating or Pending.
Please suggest an alternative way to solve the problem if it occurs again.
Thanks in advance for your time.
A pod stuck in Pending state can mean more than one thing. Next time it happens you should do 'kubectl get pods' and then 'kubectl describe pod <pod-name>'. However, since it works sometimes, the most likely cause is that the cluster doesn't have enough resources on any of its nodes to schedule the pod. If the cluster is low on remaining resources you should get an indication of this from 'kubectl top nodes' and 'kubectl describe nodes'. (Or with GKE, if you are on Google Cloud, you often get a low-resource warning in the web UI console.)
(Or if in Azure then be wary of https://github.com/Azure/ACS/issues/29 )
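Next time it gets stuck, something along these lines (assuming Heapster runs in kube-system) should surface the scheduler's reason:
$ kubectl get pods -n kube-system
$ kubectl describe pod <heapster-pod-name> -n kube-system    ## check the Events section at the bottom
$ kubectl describe nodes    ## compare Allocatable against the allocated resources
$ kubectl top nodes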

pods keep creating themselves even I deleted all deployments

I am running k8s on AWS, and I updated the nginx deployment - which normally works fine - but this time the nginx deployment won't show up in "kubectl get deployments".
I want to kill all the pods related to nginx, but they keep recreating themselves. I deleted all deployments ("kubectl delete --all deployments"); the other pods got terminated, but not the nginx ones.
I have no idea how to stop the pods from being recreated.
Any idea where to start?
Check the Deployments, ReplicationControllers, and ReplicaSets, and remove whichever of them is still managing the pods:
kubectl get deploy,rc,rs
In modern Kubernetes there is also an annotation, kubernetes.io/created-by, on the Pod showing its "owner", as seen here, but I can't lay my hands on the documentation link right now. However, I found a pastebin containing a concrete example of the contents of the annotation.
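On clusters where that annotation is no longer populated, a sketch of finding the owner through ownerReferences instead (the pod name is a placeholder):
## show which controller owns one of the recreated pods
$ kubectl get pod <nginx-pod-name> -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}{"\n"}'
## then delete that owner (often a ReplicaSet or ReplicationController) rather than the pod itself
$ kubectl delete rs <owning-replicaset>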