What's the difference between "volumeDevices" vs "volumeMounts" with k8s v1.13 - kubernetes

Running kubectl explain pod.spec.containers shows:
volumeDevices <[]Object>
volumeDevices is the list of block devices to be used by the container.
This is a beta feature.
volumeMounts <[]Object>
Pod volumes to mount into the container's filesystem. Cannot be updated.
Is there a relationship between these two container properties?
Note that kubectl version shows:
Client Version: version.Info{Major:"1", Minor:"13",
GitVersion:"v1.13.0",
GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d",
GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z",
GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server
Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0",
GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d",
GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z",
GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

'volumeDevices' is part of a new beta feature in 1.13 that allows a pod to access a raw block volume instead of a mounted filesystem volume. This is useful for some advanced applications like databases that may have their own filesystem format.
You can find the official documentation here, although it does not seem to have been updated for 1.13 yet.
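For illustration, here is a minimal pod spec sketch that uses both properties side by side; the pod name and the two PVC names (raw-block-pvc with volumeMode: Block, filesystem-pvc with the default volumeMode: Filesystem) are hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: block-volume-demo            # hypothetical pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeDevices:                   # raw block device; kubelet does not mount a filesystem
    - name: raw-data
      devicePath: /dev/xvda          # device node that appears inside the container
    volumeMounts:                    # regular filesystem mount
    - name: fs-data
      mountPath: /data
  volumes:
  - name: raw-data
    persistentVolumeClaim:
      claimName: raw-block-pvc       # hypothetical PVC created with volumeMode: Block
  - name: fs-data
    persistentVolumeClaim:
      claimName: filesystem-pvc      # hypothetical PVC created with volumeMode: Filesystem
So the two properties are siblings: volumeMounts gives the container a mounted filesystem path, while volumeDevices hands it the raw device node and leaves any formatting to the application.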

Related

k8s job pod resource usage

For regular pods (in the Running state), we can check the actual (runtime) resource utilisation using the kubectl top pod <pod_name> command.
However, for job pods (whose execution is already complete), is there any way we can fetch how much resource was consumed by those pods?
Getting this info would help us tune the resource allocation better and tell whether we are over- or under-provisioning the requests for the job pods.
Kubernetes version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:23:45Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
If there is no direct way, maybe there is a workaround to get this info.
There is no command that can show a job's resource utilisation. The only options are to use an external tool like Prometheus or to add a sidecar container that logs resource usage.
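If Prometheus was already scraping cAdvisor metrics while the job ran, a rough after-the-fact sketch could look like the queries below (the namespace and the my-job-.* pod name pattern are placeholders; the data only exists if it was scraped while the pods were alive):
# Peak memory working set per job pod over the last hour
max_over_time(container_memory_working_set_bytes{namespace="default", pod=~"my-job-.*", container!=""}[1h])
# Average CPU usage in cores per job pod over the last hour
sum by (pod) (rate(container_cpu_usage_seconds_total{namespace="default", pod=~"my-job-.*", container!=""}[1h]))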

Grafana showing k8s pods down for a minute

While using Grafana for monitoring with Prometheus, we saw that Grafana sometimes showed no pods for a service, but when I checked in the cluster, all pods were running without any issue.
This issue is not continuous. Now I have to find out why Grafana is alerting, but I don't know where to start.
Please ask if any info is needed, and please show me where I can start investigating.
Other info
This cluster is AWS EKS, using prometheus:v2.22.1. The deployment of Prometheus and the EKS cluster is done with Terraform.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.20-eks-8c49e2", GitCommit:"8c49e2efc3cfbb7788a58025e679787daed22018", GitTreeState:"clean", BuildDate:"2021-10-17T05:13:46Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.18) exceeds the supported minor version skew of +/-1

"kubectl describe ingress ..." could not find the requested resource

I am trying to run describe on an ingress, but it does not work. The get command works fine, but not describe. Is there anything I am doing wrong? I am running this against AKS.
usr@test:/mnt/c/Repos/user/charts/canary$ kubectl get ingress
NAME            HOSTS                           ADDRESS   PORTS   AGE
ingress-route   xyz.westus.cloudapp.azure.com             80      6h
usr@test:/mnt/c/Repos/user/charts/canary$ kubectl describe ingress ingress-route
Error from server (NotFound): the server could not find the requested resource
Version seems fine:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", ..}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10"...}
This could be caused by an incompatibility between the Kubernetes cluster version and the kubectl version.
Run kubectl version to print the client and server versions for the current context, for example:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.10-gke.0", GitCommit:"569511c9540f78a94cc6a41d895c382d0946c11a", GitTreeState:"clean", BuildDate:"2019-08-21T23:28:44Z", GoVersion:"go1.11.13b4", Compiler:"gc", Platform:"linux/amd64"}
You might want to upgrade your cluster version or downgrade your kubectl version. See more details at https://github.com/kubernetes/kubectl/issues/675
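For example, one way to fetch a kubectl binary that matches the server's minor version (the version and platform below are just taken from the output above; adjust them for your environment):
$ curl -LO https://dl.k8s.io/release/v1.13.10/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl
$ kubectl version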

The Kubernetes "AVAILABLE" column indicates "0", but the previous steps (in the Kubernetes guide) are OK

I need to deploy some Docker images and manage them with Kubernetes.
I followed the tutorial "Interactive Tutorial - Deploying an App" (https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/).
But after typing the command kubectl get deployments, the AVAILABLE column in the result table shows 0 instead of 1, which confuses me.
Could anyone kindly tell me what is going wrong and what I should do?
The OS is Ubuntu 16.04.
The kubectl version command shows the server and client version information fine.
The Docker image is tagged already (a mysql:5.7 image).
devserver:~$ kubectl version    
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}  
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
devserver:~$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ap-mysql     1         1         1            0           1
hello-node   1         1         1            0           1
I expect an answer explaining this phenomenon and how to resolve it. And I need to deploy my image on minikube.
Katacoda uses hosted VMs, so it may sometimes be slow to respond to terminal input.
To verify whether any deployment is present, you may run kubectl get deployments --all-namespaces. To see what is going on with your deployment, you can run kubectl describe deployment DEPLOYMENT_NAME -n NAMESPACE. To inspect a pod, you can do the same with kubectl describe pod POD_NAME -n NAMESPACE.
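A minimal sketch with the deployments from the question (the pod name carries a generated suffix, so copy the real one from kubectl get pods):
$ kubectl get deployments --all-namespaces
$ kubectl get pods                            # look for Pending, ImagePullBackOff or CrashLoopBackOff
$ kubectl describe deployment ap-mysql        # the Events section usually explains unavailable replicas
$ kubectl describe pod ap-mysql-<pod-suffix>  # inspect the failing pod itself
With a bare mysql:5.7 image, one common reason for AVAILABLE staying at 0 is the container exiting because no MYSQL_ROOT_PASSWORD (or equivalent) environment variable was set; the pod's events and logs will show that.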

Kubernetes HPA with custom metrics in 1.5

I have k8s v1.5 installed, and I tried to follow https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ to implement HPA with a custom metric.
The page above says to set the --horizontal-pod-autoscaler-use-rest-clients flag on the controller manager to true, but when I set it, the controller manager cannot start because this flag is not supported.
So where can I find a guide for k8s v1.5?
Here is my k8s version information:
[bow@devvm13 ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.0", GitCommit:"58b7c16a52c03e4a849874602be42ee71afdcab1", GitTreeState:"clean", BuildDate:"2016-12-12T23:31:15Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
--horizontal-pod-autoscaler-use-rest-clients is supported from 1.6 onwards. You can refer to https://medium.com/@marko.luksa/kubernetes-autoscaling-based-on-custom-metrics-without-using-a-host-port-b783ed6241ac as an example.
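For reference, on 1.6+ the flag belongs to the kube-controller-manager itself. In a kubeadm-style cluster that would mean adding it to the static pod manifest (the path below assumes the kubeadm default layout):
# /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm default path)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-use-rest-clients=true
    # ...keep the existing flags as they are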