Ceph Octopus: setting the autoscale mode from the ceph.conf file

Since Octopus, Ceph clusters have the osd_pool_default_pg_autoscale_mode flag set to on by default.
So far I have been able to turn it off from the CLI using ceph config set global osd_pool_default_pg_autoscale_mode off, as described here.
I would like to set it from the ceph.conf file:
[global]
...
osd pool default pg autoscale mode = off
pg autoscale mode = off
However, ceph osd pool autoscale-status still shows newly created pools with autoscaling turned on, even for pools created after restarting the OSD and MGR daemons.
Any help would be welcome.

[global]
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
Putting this in the ceph.conf on both the monitor and OSD nodes should work.
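To confirm the setting was picked up after restarting the daemons, something like the following should work (mon.a and testpool are only placeholders):
ceph config show mon.a osd_pool_default_pg_autoscale_mode   # value the running monitor actually uses
ceph osd pool create testpool 32                            # create a throwaway pool
ceph osd pool get testpool pg_autoscale_mode                # should now report: off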

Related

Modify Kubernetes cluster criSocket setting

I have a Kubernetes lab environment for studying an online course.
I missed a step in the installation instructions and didn't change the criSocket setting.
How can I change this setting and keep the rest of the cluster configuration?
I don't want to regenerate the default cluster config, as I did when I installed Kubernetes:
kubeadm config print init-defaults | tee ClusterConfiguration.yaml
The cluster contains 1 control plane node and 3 worker nodes.
cri-socket is a setting for the kubelet.
If you have already done some CRI-specific setup for the runtime you want to use, you should be able to switch to the other CRI by editing /var/lib/kubelet/kubeadm-flags.env.
After stopping the kubelet, add or modify --container-runtime-endpoint=... in that file and restart the kubelet. The kubelet will then use the CRI specified there.
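For example, the change might look something like this (the containerd socket path is only an assumption; substitute the endpoint of the CRI you actually want to use):
sudo systemctl stop kubelet
# in /var/lib/kubelet/kubeadm-flags.env, add the endpoint to the KUBELET_KUBEADM_ARGS line, e.g.:
#   KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock ..."
sudo systemctl restart kubelet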
This article may help you: https://dev.to/stack-labs/how-to-switch-container-runtime-in-a-kubernetes-cluster-1628

minikube start - how to modify the KubeletConfiguration passed to kubeadm?

I would like to set KubeletConfiguration.cpuCFSQuota = false in the config.yaml passed to kubeadm when launching minikube, in order to turn off CPU resource checking, but I have not managed to find an option for this in the documentation here: https://minikube.sigs.k8s.io/docs/handbook/config/. The closest solution I have found is the option --extra-config=kubelet.cpu-cfs-quota=false, but the --cpu-cfs-quota option for the kubelet has been deprecated and no longer has any effect.
Any ideas appreciated.
Environment:
Ubuntu 20.04
Minikube 1.17.1
Kubernetes 1.20.2
Driver docker (20.10.2)
Thanks,
Piers.
Using the --extra-config=kubelet.* flag alongside minikube start is a good approach, but you would also need to set the kubelet parameters via a config file.
As you already noticed, the --cpu-cfs-quota flag:
Enable CPU CFS quota enforcement for containers that specify CPU limits (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag.)
So you need to set that parameter by creating a kubelet config file:
The configuration file must be a JSON or YAML representation of the
parameters in this struct. Make sure the Kubelet has read permissions
on the file.
Here is an example of what this file might look like:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
memory.available: "200Mi"
Now you can use that config file to set cpuCFSQuota = false:
// cpuCFSQuota enables CPU CFS quota enforcement for containers that
// specify CPU limits.
// Dynamic Kubelet Config (beta): If dynamically updating this field, consider that
// disabling it may reduce node stability.
// Default: true
// +optional
CPUCFSQuota *bool `json:"cpuCFSQuota,omitempty"`
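For instance, extending the sample file above, a config that turns the quota off could look roughly like this (a minimal sketch; keep whatever other kubelet settings you need):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuCFSQuota: false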
Then call minikube with --extra-config=kubelet.config=/path/to/config.yaml.
Alternatively, you can start your minikube without the --extra-config flag and then start the kubelet with the --config flag set to the path of the kubelet's config file. The kubelet will then load its config from this file.
I know these are a few more steps than you expected, but setting the kubelet parameters via a config file is the recommended approach because it simplifies node deployment and configuration management.

GKE - Upgrading cluster master after cluster creation completes

When we increase the load using a JMeter client, my deployed service is interrupted, and the GCP/GKE console says:
Upgrading cluster master
The values shown below are going to change soon.
And my kubectl client throws this error during the upgrade:
Unable to connect to the server: dial tcp 35.236.238.66:443: connectex: No connection could be made because the target machine actively refused it.
How can I stop this upgrade or prevent my service from being interrupted? If the service gets interrupted, then there is no benefit to this auto scaling. I am new to GKE, so please let me know if I am missing any configuration or parameter here.
I am using this command to create my cluster:
gcloud container clusters create ajeet-gke --zone us-east4-b --node-locations us-east4-b --machine-type n1-standard-8 --num-nodes 1 --enable-autoscaling --min-nodes 4 --max-nodes 16
It is not upgrading the k8s version, because it works fine with a smaller load; but as I increase the load, the cluster starts upgrading the master. So it looks like the master is resizing itself for more nodes. After the upgrade I can see more nodes in the GCP console. https://github.com/terraform-providers/terraform-provider-google/issues/3385
The command below says autoscaling is not enabled on the instance group:
> gcloud compute instance-groups managed list
NAME                                   AUTOSCALED  LOCATION    SCOPE
ajeet-gke-cluster-default-pool-4***0   no          us-east4-b  zone
Workaround
Sorry, I forgot to update this here. I found a workaround to fix it: after splitting the cluster creation command into two steps, the cluster autoscales without restarting the master node:
gcloud container clusters create ajeet-ggs --zone us-east4-b --node-locations us-east4-b --machine-type n1-standard-8 --num-nodes 1
gcloud container clusters update ajeet-ggs --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone us-east4-b --node-pool default-pool
To prevent this, you should always create your cluster with the cluster version explicitly pinned to the latest available version.
See the documentation: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#master
This means that Google manages the master: if your master is not up to date, it will be upgraded to the latest version, which allows Google to limit the number of versions it has to manage. https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters
As for why you see a service interruption during the update: you are in zonal mode with only one master. To prevent this, you should use a regional cluster with more than one master, which allows for a clean rolling update.
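For example, a regional, version-pinned variant of the original command could look roughly like this (the version string is purely illustrative; list the versions actually offered with gcloud container get-server-config --region us-east4):
gcloud container clusters create ajeet-gke \
  --region us-east4 \
  --cluster-version 1.20.2-gke.2500 \
  --machine-type n1-standard-8 \
  --num-nodes 1 \
  --enable-autoscaling --min-nodes 1 --max-nodes 10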
The master won't resize the node pool unless the autoscaling feature is enabled on it.
As mentioned in the answer above, this is a feature at the node-pool level. From the description of the issue, it does seem like autoscaling is enabled on your node pool, and GKE's cluster autoscaler automatically resizes the cluster based on the demands of the workloads you want to run (i.e. when there are pods that cannot be scheduled due to resource shortages such as CPU).
Additionally, Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes.
It is therefore highly recommended not to use Compute Engine's autoscaling feature (or rely on the autoscaling status shown by the MIG) on instance groups created by Kubernetes Engine.

How to change the --horizontal-pod-autoscaler-sync-period field in kube-controller-manager to 5s in GKE

I am trying to set up horizontal pod autoscaling in GKE. I have found no proper documentation on how to reduce --horizontal-pod-autoscaler-sync-period to 5 seconds via the kube-controller-manager.
The link below says there is a possibility of changing these flags:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
Are there any proper implementation steps for this?
You are not able to do this on GKE, EKS, or other managed clusters.
In order to change or add flags in kube-controller-manager, you need access to the /etc/kubernetes/manifests/ directory on the master node so that you can modify the parameters in /etc/kubernetes/manifests/kube-controller-manager.yaml.
GKE, EKS, and other such clusters are managed solely by their providers, and you are not given access to the master nodes.
But you can create a cluster with kubeadm init and configure/change it as you like.
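On such a self-managed kubeadm cluster, the change would look roughly like this (a sketch; the rest of your manifest will differ). The kubelet restarts the static pod automatically after the file is saved:
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-sync-period=5s
    # ...keep the existing flags unchanged...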
You can stop your minikube cluster and start it with your extra configs:
minikube start --extra-config 'controller-manager.horizontal-pod-autoscaler-sync-period=5s'
For more details, you can go through https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults

K8s Global pod settings

I am debugging certain behavior in the application pods I am launching on a K8s cluster. To do that, I am increasing logging verbosity by adding the --v=N flag to the kubectl create deployment command.
My question is: how can I configure increased verbosity globally so that all pods start reporting at the increased verbosity, including pods in the kube-system namespace?
I would prefer to do this without restarting the K8s cluster, but if there is no other way, I can restart.
Thanks,
Ankit
For your applications, there is nothing global, as verbosity is not something that has a global meaning. You would have to add the appropriate config file settings, env vars, or CLI options for whatever you are using.
For Kubernetes itself, you can turn up the logging on the kubelet command line, but the defaults are already pretty verbose, so I'm not sure you really want to do that unless you're developing changes for Kubernetes.
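If you do decide to try it on a kubeadm-provisioned node, something along these lines should work (assuming the kubelet reads its extra flags from /var/lib/kubelet/kubeadm-flags.env, as kubeadm sets up by default):
sudo systemctl stop kubelet
# append e.g. -v=4 to the KUBELET_KUBEADM_ARGS line in /var/lib/kubelet/kubeadm-flags.env
sudo systemctl restart kubelet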