Enabling Kubernetes PodPresets with kops - kubernetes

I've got a kubernetes cluster which was set up with kops with 1.5, and then upgraded to 1.6.2. I'm trying to use PodPresets. The docs state the following requirements:
You have enabled the api type settings.k8s.io/v1alpha1/podpreset
You have enabled the admission controller PodPreset
You have defined your pod presets
I'm seeing that for 1.6.x the first is taken care of (how can I verify?). How can I apply the second? I can see that there are three kube-apiserver-* pods running in the cluster (I imagine one per AZ). I guess I could edit their YAML config from the Kubernetes dashboard and add PodPreset to the admission-control string, but is there a better way to achieve this?

You can list the API groups which are currently enabled in your cluster either with the kubectl api-versions command, or by sending a GET request to the /apis endpoint of your kube-apiserver:
$ curl localhost:8080/apis
{
  "paths": [
    "/api",
    "/api/v1",
    "...",
    "/apis/settings.k8s.io",
    "/apis/settings.k8s.io/v1alpha1",
    "..."
  ]
}
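Equivalently, with kubectl (the output shown is illustrative):
$ kubectl api-versions | grep settings.k8s.io
settings.k8s.io/v1alpha1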
Note: The settings.k8s.io/v1alpha1 API is enabled by default on Kubernetes v1.6 and v1.7 but will be disabled by default in v1.8.
You can use a kops ClusterSpec to customize the configuration of your Kubernetes components during the cluster provisioning, including the API servers.
This is described on the documentation page Using A Manifest to Manage kops Clusters, and the full spec for the KubeAPIServerConfig type is available in the kops GoDoc.
Example:
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: k8s.example.com
spec:
  kubeAPIServer:
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - PodPreset
To update an existing cluster, perform the following steps:
Get the full cluster configuration with
kops get cluster name --full
Copy the kubeAPIServer spec block from it.
Do not push back the full configuration. Instead, edit the cluster configuration with
kops edit cluster name
Paste the kubeAPIServer spec block, add the missing bits, and save.
Update the cluster resources with
kops update cluster name --yes
Perform a rolling update to apply the changes:
kops rolling-update cluster name --yes
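To sanity-check that the flag actually landed after the rolling update, something along these lines should show the admission-control configuration on the running API server pods (the k8s-app=kube-apiserver label is an assumption about how kops labels its static pods):
# list the admission-control flags of the kube-apiserver pods
$ kubectl -n kube-system get pods -l k8s-app=kube-apiserver -o yaml | grep admission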

Related

Installed prometheus-community / helm-charts but I can't get metrics on "default" namespace

I recently learned about helm and how easy it is to deploy the whole prometheus stack for monitoring a Kubernetes cluster, so I decided to try it out on a staging cluster at my work.
I started by creating a dedicated namespace on the cluster for monitoring with:
kubectl create namespace monitoring
Then, with helm, I added the prometheus-community repo with:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Next, I installed the chart with a prometheus release name:
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
At this time I didn't pass any custom configuration because I'm still trying it out.
After the install is finished, it all looks good. I can access the prometheus dashboard with:
kubectl port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090 -n monitoring
There, I see a bunch of pre-defined alerts and rules that are already monitoring the cluster, but the problem is that I don't quite understand how to create new rules to check the pods in the default namespace, where I actually have my services deployed.
I am looking at http://localhost:9090/graph to play around with the queries and I can't seem to use any that will give me metrics on my pods in the default namespace.
I am a bit overwhelmed by the amount of information, so I would like to know what I missed or what I am doing wrong here.
The Prometheus Operator includes several Custom Resource Definitions (CRDs), including ServiceMonitor (and PodMonitor). ServiceMonitors tell the Operator which services should be monitored.
I'm familiar with the Operator, though not with the Helm deployment, but I suspect you'll want to create ServiceMonitors to generate metrics for your apps in any namespace (including default).
See: https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions
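For illustration, a minimal ServiceMonitor sketch; the names, labels, and port are placeholders, and with the kube-prometheus-stack chart and a release named prometheus the Operator typically only picks up ServiceMonitors carrying the release: prometheus label:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                # hypothetical name
  namespace: monitoring
  labels:
    release: prometheus       # assumption: the chart's Prometheus selects ServiceMonitors by this label
spec:
  namespaceSelector:
    matchNames:
      - default               # look for Services in the default namespace
  selector:
    matchLabels:
      app: my-app             # assumption: label on the Service in front of your pods
  endpoints:
    - port: metrics           # assumption: named Service port exposing /metrics
      interval: 30s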
ServiceMonitors and PodMonitors are CRDs for the Prometheus Operator. When working directly with the Prometheus Helm chart (without the Operator), you have to configure your targets directly in values.yaml by editing the scrape_configs section.
It is more complex to do, so take a deep breath and start by reading this: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
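For the non-Operator case, a minimal scrape_configs entry of the kind described in the linked documentation might look like this (the job name, namespace, and annotation convention are assumptions):
scrape_configs:
  - job_name: default-pods            # hypothetical job name
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - default                 # discover pods in the default namespace
    relabel_configs:
      # common convention: only keep pods annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"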

KOPS reload ssh access key to cluster

I want to rotate my Kubernetes cluster's SSH access key using commands from this website:
https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access
so those:
kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes
And when I run the last command, "kops update cluster --yes", I get this error:
completed cluster failed validation: spec.spec.kubeProxy.enabled: Forbidden: kube-router requires kubeProxy to be disabled
Does anybody have any idea how I can change the SSH key without disabling kubeProxy?
This problem comes from having set
spec:
  networking:
    kuberouter: {}
but not
spec:
  kubeProxy:
    enabled: false
in the cluster spec.
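Put together, the relevant part of the corrected spec would look like this:
spec:
  networking:
    kuberouter: {}
  kubeProxy:
    enabled: false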
Export the config using kops get -o yaml > myspec.yaml and edit it according to the error above. Then you can apply the spec using kops replace -f myspec.yaml.
It is considered a best practice to check the above yaml into version control to track any changes done to the cluster configuration.
Once the cluster spec has been amended, the new ssh key should work as well.
What version of Kubernetes are you running? If you are running the latest one, 1.18.xx, the user is not admin but ubuntu.
One other thing you could do is to edit the cluster first and set the kubeProxy enabled spec, run kops update cluster and a rolling update, and then do the secret delete and creation.

How to disable istio-proxy sidecar access log for specific deployments in Kubernetes

I'm using istio-proxy sidecar with Kubernetes, sidecars are automatically added to the Kubernetes pods.
I want to turn off the access log for one single deployment (without disabling the sidecar).
Is there an annotation to do that?
As I mentioned in comments
If you want to disable Envoy's access logging globally, you can use istioctl or the Operator to do that.
There is istio documentation about that.
Remove, or set to "", the meshConfig.accessLogFile setting in your Istio install configuration.
There is an istioctl command:
istioctl install --set meshConfig.accessLogFile=""
There is an example with the Istio Operator:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    accessLogFile: ""
If you want to disable it for a specific pod, you can use the command below; there is Envoy documentation about that.
curl -X POST http://localhost:15000/logging?level=off
As you're looking for a way to do that for a deployment, that trick with an init container and the above curl command might actually work.
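For illustration, a rough sketch that applies that admin API call to every pod of a deployment (the app=my-app label selector and the availability of curl inside the istio-proxy container are assumptions):
# turn off Envoy access logging in every pod matching the label
for pod in $(kubectl get pods -l app=my-app -o name); do
  kubectl exec "$pod" -c istio-proxy -- \
    curl -s -X POST "http://localhost:15000/logging?level=off"
done
Note that this is a runtime change only; the setting reverts when a pod restarts.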

Is the prometheus-to-sd required for GKE? Can I delete it?

A while back a GKE cluster got created which came with a daemonset of:
kubectl get daemonsets --all-namespaces
...
kube-system prometheus-to-sd 6 6 6 3 6 beta.kubernetes.io/os=linux 355d
Can I delete this daemonset without issue?
What is it being used for?
What functionality would I be losing without it?
TL;DR
Even if you delete it, it will be back.
A little bit more explanation
Citing the explanation by user #Yasen of what prometheus-to-sd is:
prometheus-to-sd is a simple component that can scrape metrics stored in prometheus text format from one or multiple components and push them to the Stackdriver. Main requirement: k8s cluster should run on GCE or GKE.
Github.com: Prometheus-to-sd
Assuming that the command to delete this daemonset would be:
$ kubectl delete daemonset prometheus-to-sd --namespace=kube-system
Executing this command will indeed delete the daemonset, but it will be back after a while.
The prometheus-to-sd daemonset is managed by the Addon Manager, which will recreate the deleted daemonset back to its original state.
Below is the part of the prometheus-to-sd daemonset YAML definition which states that this daemonset is managed by addonmanager:
labels:
  addonmanager.kubernetes.io/mode: Reconcile
You can read more about it by following: Github.com: Kubernetes: addon-manager
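You can confirm the label on your own cluster with something like:
$ kubectl -n kube-system get daemonset prometheus-to-sd \
    -o jsonpath='{.metadata.labels.addonmanager\.kubernetes\.io/mode}'
Reconcile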
Deleting this daemonset is strictly connected to the monitoring/logging solution you are using with your GKE cluster. There are 2 options:
Stackdriver logging/monitoring
Legacy logging/monitoring
Stackdriver logging/monitoring
You need to completely disable logging and monitoring of your GKE cluster to delete this daemonset.
You can do it by following a path:
GCP -> Kubernetes Engine -> Cluster -> Edit -> Kubernetes Engine Monitoring -> Set to disabled.
Legacy logging/monitoring
If you are using the legacy solution, which is available up to GKE version 1.14, you need to disable the option of Legacy Stackdriver Monitoring by following the same path as above.
Let me know if you have any questions about that.
TL;DR - it's ok
Given your context, I suppose it's OK to shut down the prometheus component of your cluster, except in cases where reports, alerts, and monitoring are critical parts of your system.
Let's dive into the sources from GCP
As per source code at GoogleCloudPlatform:
prometheus-to-sd is a simple component that can scrape metrics stored in prometheus text format from one or multiple components and push them to the Stackdriver. Main requirement: k8s cluster should run on GCE or GKE.
Prometheus
From their Prometheus Github Page:
The Prometheus monitoring system and time series database.
To get a picture of what it is for, you can read this awesome guide on Prometheus: Prometheus Monitoring : The Definitive Guide in 2019 – devconnected
Also, there are hundreds of videos on their YouTube channel Prometheus Monitoring
Your questions
So, answering your questions:
Can I delete this daemonset without issue?
It depends. As I said, you can, except in cases where reports, alerts, and monitoring are critical parts of your system.
What is it being used for?
It's a TSDB for monitoring
What functionality would I be losing without it?
metrics
→ therefore dashboards
→ therefore alerting

How to override kops' default kube-dns add-on spec

We're running kops->terraform k8s clusters in AWS, and because of the number of k8s jobs we have in flight at once, the kube-dns container is getting OOMkilled, forcing us to raise the memory limit.
As for automating this so it both survives cluster upgrades and is automatically done for new clusters created from the same template, I don't see a way to override the canned kops spec. The only options I can see involve some update (kubectl edit deployment kube-dns, delete the kube-dns add-on deployment and use our own, overwrite the spec uploaded to the kops state store, etc.) that probably needs to be done each time after using kops to update the cluster.
I've checked the docs and even the spec source and no other options stand out. Is there a way to pass a custom kube-dns deployment spec to kops? Or tell it not to install the kube-dns add-on?
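For illustration, the interim workaround mentioned above (which would have to be re-applied after every kops-driven update) might look roughly like this; the container name and the limits are placeholders:
# bump the memory limit of the kube-dns container in place (kubedns is the assumed container name)
kubectl -n kube-system set resources deployment kube-dns \
  -c kubedns --limits=memory=256Mi --requests=memory=100Mi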