This describes how one would install the agent on a regular GCE instance:
https://cloud.google.com/monitoring/agent/install-agent
Previously the cluster ran on Debian OS nodes and we had the agent running to monitor CPU, disk space, etc. Now that it's upgraded to Kubernetes 1.4 and running on Container-Optimized OS (https://cloud.google.com/container-optimized-os/docs/), the agent can't be installed manually.
I realise pods are monitored automatically, but that's only part of the picture.
I feel like I'm missing something here, as it would be a big step backward if this weren't possible.
I've run into the same thing several times. You have to switch back to the container-vm image format in order to install the Stackdriver agent.
gcloud container clusters upgrade --image-type=container_vm [CLUSTER_NAME]
That should flip it back. You can install the agent once the images flip. We're running 1.4.7 on the container-vm image and haven't seen any issues. It seems like overhead, but not an actual step backward, if that helps.
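Once the nodes are back on the container-vm image, the install is the usual per-node routine from the page linked above. A rough sketch (the node name is a placeholder, and the script name is what the install page documented at the time, so double-check it there):
gcloud compute ssh [NODE_NAME]
# on the node: download and run the Stackdriver monitoring agent installer
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh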
Related
I am seeing continuous 8 to 15% CPU usage from Rancher-related processes while there is not a single cluster being managed by it, nor is any user interacting with it. What explains this high CPU usage when idle? Also, there are several "rancher-agent" containers perpetually running and restarting, which does not look right. There is no Kubernetes cluster running on this machine (unless Rancher is creating its own single-node cluster for whatever reason).
I am using Rancher 2.3
(screenshots of docker stats, docker ps, and htop output omitted)
I'm not sure I would call 15% "high", but Kubernetes has a lot of ongoing background work even if the cluster looks entirely quiet: processing node heartbeats, etcd election traffic, controllers with time-based conditions that have to be re-evaluated. k3s probably streamlines that a bit, but 0% CPU usage is not a design goal even in that fork.
Rancher (2.3.x) does not do anything involving k3s. These pictures are not "just Rancher".
k3s is separately installed and running.
The agents further suggest that this node is added to a cluster (maybe the same Rancher running on it, maybe not).
The agent restarting all the time is not helping CPU usage, especially if it is registered to that local Rancher instance.
Also, you're running a completely random commit from HEAD instead of an actual release.
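If you want to see what the agent is actually doing, a quick sketch with plain Docker commands (the name filter and container ID are assumptions based on your docker ps output):
docker ps -a --filter name=rancher-agent              # list the agent containers and their restart status
docker logs --tail 100 <rancher-agent-container-id>   # see why the agent keeps exiting
docker stats --no-stream                              # one-shot snapshot of per-container CPU usage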
FWIW... In my case, I built the Raspberry Pi-based Rancher/k3s lab as designed by Network Chuck on YouTube. The VM on my Linux host that runs Rancher starts off fairly quiet, then over the course of a couple of days the rancherd process consistently hits near 100% CPU usage (I gave it 3 vCPUs) and stays there, even though I have no pods running on either the Pi cluster or the local Rancher VM cluster. A reboot starts the process over, but within a few days it's back to 100% CPU usage.
While writing this I just noticed that, due to a DHCP issue, the original external IP of the local Rancher cluster node got changed from .163 to .151 (I had reserved .151 for it in Pi-hole, just never updated the Rancher config). I just fixed it in the Rancher GUI; we'll see if that clears up some of the errors I saw in the logs and keeps the CPU usage normal at idle.
I am new to Kubernetes and trying to set up a master and 2-node architecture using Oracle VirtualBox.
OS: Ubuntu 16.04.6 LTS
Docker: 17.03.2-ce
Kubernetes:
Client Version: v1.17.4
Server Version: v1.17.4
When I run the join command on the worker node, "kube-controller-manager" and "kube-apiserver" disappear, and the worker nodes do not join (even though the join command executes successfully).
I have set Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs", but I still get the same error.
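A sketch of checking that the Docker and kubelet cgroup drivers actually agree (the kubelet config path is the kubeadm default and may differ on other setups):
docker info 2>/dev/null | grep -i "cgroup driver"                 # what Docker is using
grep cgroupDriver /var/lib/kubelet/config.yaml                    # what the kubelet is using
sudo systemctl daemon-reload && sudo systemctl restart kubelet    # apply any change to the kubelet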
Please see the snapshot below.
Thanks.
The link you have provided is no longer available. While learning and trying out Kubernetes for the first time, I highly recommend using the official docs.
There you will find a detailed guide regarding Creating a single control-plane cluster with kubeadm. Note that:
To follow this guide, you need:
One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
2 GiB or more of RAM per machine–any less leaves little room for your apps.
At least 2 CPUs on the machine that you use as a control-plane node.
Full network connectivity among all machines in the cluster. You can use either a public or a private network.
You also need to use a version of kubeadm that can deploy the version of Kubernetes that you want to use in your new cluster. Kubernetes' version and version skew support policy applies to kubeadm as well as to Kubernetes overall. Check that policy to learn about what versions of Kubernetes and kubeadm are supported. This page is written for Kubernetes v1.18.
The kubeadm tool's overall feature state is General Availability (GA). Some sub-features are still under active development. The implementation of creating the cluster may change slightly as the tool evolves, but the overall implementation should be pretty stable.
If you encounter any issues, first try the troubleshooting steps.
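For reference, the flow in that guide boils down to roughly the following (the pod network CIDR is just an example for a Flannel-style add-on, and the token/hash placeholders come from the kubeadm init output):
# on the control-plane (master) node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# install a pod network add-on (Flannel, Calico, ...) before joining workers
# on each worker node, using the command printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>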
Please let me know if that helped.
Let's say we're using EKS on AWS: would we need to manually manage the underlying nodes' OS, installing patches and updates?
I would imagine that the pods and containers running inside the node could be updated by simply version-bumping the container's base OS in your Dockerfile, but I'm unsure how that would work for the node's OS. Would the provider (AWS, in this case) manage that?
It would be great to get an explanation for both Windows and Linux nodes. Are they different? Thank you!
Yes, you need to keep the nodes updated. But this has recently become easier with the new Bottlerocket, a container-optimized OS for nodes in EKS.
Updates to Bottlerocket can be automated using container orchestration services such as Amazon EKS, which lowers management overhead and reduces operational costs.
See also the blog post Bottlerocket – Open Source OS for Container Hosting
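For clusters using EKS managed node groups, rolling the nodes to a newer AMI release can be triggered with a single call; a sketch with placeholder cluster and node group names (availability of Bottlerocket for managed node groups depends on the current EKS feature state):
aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-nodegroup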
What I want to do is deploy a multi-container application:
in RHEL OS,
as a Red Hat-supportable product (if possible),
in a single-node K8s cluster (bare-metal machine).
So I found several ways, but I have concerns about each.
minikube, Minishift, OKD, CodeReady Containers:
First, they run in a VM, but what I want is to run on the host.
Second, their docs say they are not for production environments.
So, is there any PaaS for a single-node cluster suitable for a production environment?
Docker, docker-compose:
The deployment target OS will probably be RHEL 8. I guess it is not a good idea to use Docker, because Red Hat products are moving away from Docker; even in the RHEL 8 repository there is no Docker RPM for el8 yet.
My questions are:
Is there any PaaS for a single-node cluster suitable for a production environment?
If not, is docker-compose the best option?
As was already mentioned, you should not use a single-node setup in a production environment.
You should not do that because, if your server goes down, your service is offline: there is nothing to fail over to, nothing that might continue the work that was in progress.
If you still want to set up a single-node Kubernetes cluster, you can do that using kubeadm. I think this is as close to production grade as you can get.
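A rough sketch of that single-node setup, assuming a kubeadm version from the v1.17/v1.18 era where the control-plane taint is still named "master" (check your version's docs, and adjust the example pod network CIDR to your add-on):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# allow ordinary workloads to schedule on the lone control-plane node
kubectl taint nodes --all node-role.kubernetes.io/master-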
Other than that, as an alternative you can play with Installing Kubernetes with Minikube or Install a local Kubernetes with MicroK8s.
It's up to you which one you choose, but you need to remember that this should not run as production; it should be a lab or test environment which, if it works as expected, gets migrated to a multi-node production-grade cluster.
As for a PaaS on a single node, there is Dokku.
Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen.
And if you would consider using a cloud for PaaS, you can choose from AWS Cloud9, Azure App Service or Google App Engine.
A single-node cluster is not recommended for production applications. You need scalability, high availability, and fault tolerance for production apps, and you must have more than one node to get those features.
We are running out of disk space for containers running on our nodes. We are running k8s 1.0.1 on AWS. We are also trying to do all our configuration in software instead of configuring things manually.
How do we increase the disk size of the nodes? Right now they have 8 GB each, as created by https://get.k8s.io | bash. It's fine if we have to create a new cluster and move our services/pods to it.
You should be able to do so by setting the MINION_ROOT_DISK_SIZE environment variable before creating the cluster. However, this option was just merged yesterday, so it may not be available yet unless you use the cluster/kube-up.sh script from HEAD of the repository.
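In other words, something along these lines from a checkout of the repository at HEAD (the provider variable and the size value/units are assumptions based on how the other kube-up settings work):
export KUBERNETES_PROVIDER=aws
export MINION_ROOT_DISK_SIZE=32   # desired root disk size for the minions (assumed to be in GB)
cluster/kube-up.sh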