EKS cannot create persistent volume - kubernetes

I am deploying Prometheus, which needs a persistent volume (I have also tried with other StatefulSets), but the persistent volume is not created, and the persistent volume claim shows the following error after kubectl describe pvc -n {namespace} {pvc-name}:
Type: Warning
Reason: ProvisioningFailed
From: persistentvolume-controller
Message: (combined from similar events): Failed to provision volume with StorageClass "gp2": error querying for all zones: error listing AWS instances: "UnauthorizedOperation: You are not authorized to perform this operation.\n\tstatus code: 403, request id: d502ce90-8af0-4292-b872-ca04900d41dc"
kubectl get sc
NAME            PROVISIONER             AGE
gp2 (default)   kubernetes.io/aws-ebs   7d17h
kubectl describe sc gp2
Name: gp2
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
K8s versions (AWS EKS):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b7174d", GitCommit:"b7174db5ee0e30c94a0b9899c20ac980c0850fc8", GitTreeState:"clean", BuildDate:"2019-10-18T17:56:01Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}

I solved this problem by adding AmazonEKSClusterPolicy and AmazonEKSServicePolicy to the EKS cluster role.

Check your IAM policies, and ensure that you are using the correct access keys.
Also check that the IAM role has permission to provision dynamic storage.
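For reference, here is a minimal sketch of the EC2 permissions the in-tree kubernetes.io/aws-ebs provisioner needs on the cluster/node IAM role; the exact action list below is an assumption inferred from the error above ("error listing AWS instances" is a denied Describe call), so treat it as a starting point rather than a complete policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeVolumes",
        "ec2:CreateVolume",
        "ec2:DeleteVolume",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}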

Related

Manually trigger kubernetes cronjob fails

My k8s version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.7-eks-d88609", GitCommit:"d886092805d5cc3a47ed5cf0c43de38ce442dfcb", GitTreeState:"clean", BuildDate:"2021-07-31T00:29:12Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
My CronJob file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app: git-updater
  name: git-updater
  namespace: my-ns
spec:
  concurrencyPolicy: Replace
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: git-updater
          name: git-updater
        spec:
          containers:
          - args:
            - sh
            - -c
            - apk add --no-cache git openssh-client && cd /repos && ls -d */*.git/ | while read dir; do cd /repos/$dir; git gc; git fetch --prune; done
            image: alpine
            name: updater
            volumeMounts:
            - mountPath: test
              name: test
            - mountPath: test
              name: test
          restartPolicy: Never
          volumes:
          - persistentVolumeClaim:
              claimName: pvc
            name: repos
  schedule: '@daily'
  successfulJobsHistoryLimit: 4
...
When I create the CronJob from the file, all goes well:
kubectl -n my-ns create -f git-updater.yaml
cronjob.batch/git-updater created
But I would like to trigger it manually just for testing purposes, so I do:
kubectl -n my-ns create job --from=cronjob/git-updater test-job
error: unknown object type *v1beta1.CronJob
Which is strange, because I have just been able to create it from the file.
In another similar post it was suggested to switch from apiVersion batch/v1beta1 to batch/v1... but when I do so, I get:
kubectl -n my-ns create -f git-updater.yaml
error: unable to recognize "git-updater.yaml": no matches for kind "CronJob" in version "batch/v1"
I am a little stuck here, how can I test my newly and successfully created CronJob?
That's your v1.20 cluster version doing that.
The short answer is that you should upgrade the cluster to 1.21, where CronJobs work more or less stably.
Check no matches for kind "CronJob" in version "batch/v1", especially this comment:
One thing is the api-version; another one is in which version the resource you are creating is available. By version 1.19.x you do have batch/v1 as an available api-resource, but you don't have the resource CronJob under it.
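Before choosing an apiVersion, you can ask the API server which group/versions actually serve CronJob on your cluster, for example:
kubectl api-versions | grep batch
kubectl api-resources --api-group=batch
On a 1.20 cluster CronJob is served only under batch/v1beta1; from 1.21 it is also available (and stable) under batch/v1.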
Kubernetes Create Job from Cronjob not working
I have a 1.20.9 GKE cluster and face the same issue as you:
$kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.9-gke.1001", GitCommit:"1fe18c314ed577f6047d2712a9d1c8e498e22381", GitTreeState:"clean", BuildDate:"2021-08-23T23:06:28Z", GoVersion:"go1.15.13b5", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
$kubectl -n my-ns create -f cronjob.yaml
cronjob.batch/git-updater created
$kubectl -n my-ns create job --from=cronjob/git-updater test-job
error: unknown object type *v1beta1.CronJob
At the same time, your CronJob YAML works perfectly with apiVersion: batch/v1 on a 1.21 GKE cluster:
$kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3-gke.2001", GitCommit:"77cdf52c2a4e6c1bbd8ae4cd75c864e9abf4c79e", GitTreeState:"clean", BuildDate:"2021-08-20T18:38:46Z", GoVersion:"go1.16.6b7", Compiler:"gc", Platform:"linux/amd64"}
$kubectl -n my-ns create -f cronjob.yaml
cronjob.batch/git-updater created
$ kubectl -n my-ns create job --from=cronjob/git-updater test-job
job.batch/test-job created
$kubectl describe job test-job -n my-ns
Name:           test-job
Namespace:      my-ns
Selector:       controller-uid=ef8ab972-cff1-4889-ae11-60c67a374660
Labels:         app=git-updater
                controller-uid=ef8ab972-cff1-4889-ae11-60c67a374660
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Parallelism:    1
Completions:    1
Start Time:     Tue, 02 Nov 2021 17:54:38 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
...
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  16m   job-controller  Created pod: test-job-vcxx9

JOB_COMPLETION_INDEX environment variable is empty when submitting an Indexed Job to kubernetes

I've made my cluster using minikube.
As far as I know, the Indexed Job feature was added in Kubernetes 1.21,
but when I create my Job, it looks like there is no $JOB_COMPLETION_INDEX environment variable.
Here is my YAML:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  parallelism: 2
  completions: 2
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: 'my-container'
        image: 'busybox'
        command: [
          "sh",
          "-c",
          "echo My Index is $JOB_COMPLETION_INDEX && sleep 3600",
        ]
job.batch/my-job created
But as I mentioned before, it looks like the Job is NOT an Indexed Job.
Below are the logs of my pods (controlled by my-job):
$ kubectl logs pod/my-job-wxhh6
My Index is
$ kubectl logs pod/my-job-nbxkr
My Index is
It seems that the $JOB_COMPLETION_INDEX environment variable is empty.
I'll skip it to keep this brief, but when I accessed the container directly, there was also no $JOB_COMPLETION_INDEX.
Below is the result of the kubectl describe job.batch/my-job command:
Name:           my-job
Namespace:      default
Selector:       controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
Labels:         controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
                job-name=my-job
Annotations:    <none>
Parallelism:    2
Completions:    2
Start Time:     Sat, 28 Aug 2021 03:56:46 +0900
Pods Statuses:  2 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
           job-name=my-job
  Containers:
   my-container:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      echo My Index is $JOB_COMPLETION_INDEX && sleep 3600
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  100s  job-controller  Created pod: my-job-wxhh6
  Normal  SuccessfulCreate  100s  job-controller  Created pod: my-job-nbxkr
There is no annotation, although according to the documentation the batch.kubernetes.io/job-completion-index annotation should be there.
My version is at least Kubernetes 1.21, where the Indexed Job feature was introduced.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
$ minikube kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
What should I do?
As of now (2021-08-29), IndexedJob is an alpha feature, so I started minikube with the feature-gates flag:
minikube start --feature-gates=IndexedJob=true
and it works well.
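A quick way to confirm the gate took effect: when the IndexedJob gate is disabled, the API server silently drops the completionMode field, so re-create the Job and inspect it (my-job.yaml here is a hypothetical file holding the Job manifest above):
minikube start --feature-gates=IndexedJob=true
kubectl delete job my-job && kubectl create -f my-job.yaml
kubectl get job my-job -o jsonpath='{.spec.completionMode}'
kubectl logs -l job-name=my-job --prefix
The jsonpath query should print Indexed, and each pod's log should now show its own index.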
The documentation is incorrect; you need to be on the rapid release channel, at least version 1.22.

K8s Error from server (NotFound): deployments.apps "nginx" not found

Problem
The Coursera course Google Cloud Fundamentals: Getting Started with Kubernetes Engine
has instructions to run and expose a pod, and the demo video shows it working.
However, it causes the error below in my execution. How can I fix it?
kubectl run nginx --image=nginx:1.10.0
kubectl expose deployment nginx --type LoadBalancer --port 80
---
Error from server (NotFound): deployments.apps "nginx" not found
Environment
GCP k8s.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.8-gke.2100", GitCommit:"4cd085fda961821985d176d25b67445c1efb6ba1", GitTreeState:"clean", BuildDate:"2021-07-16T09:22:57Z", GoVersion:"go1.15.13b5", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
Cause
kubectl run does not create a deployment; in recent kubectl versions it creates a bare Pod.
'Error from server (NotFound): deployments.extensions "nginx" not found' when exposing NodePort #31
Did you create a pod with "kubectl run"? If yes, that doesn't create a deployment with (at least) kubectl v1.18.2, so instead use "kubectl create deployment nginx --image=nginx:1.10.0"
Fix
Create a deployment as in Creating and exploring an nginx deployment from a YAML or run kubectl create deployment ....
$ kubectl create deployment nginx --image=nginx:1.12.0
deployment.apps/nginx created
$ kubectl expose deployment nginx --type LoadBalancer --port 80
service/nginx exposed
$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.83.240.1    <none>           443/TCP        44m
nginx        LoadBalancer   10.83.246.84   35.225.127.227   80:30825/TCP   60s
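For reference, here is a minimal manifest roughly equivalent to what kubectl create deployment nginx --image=nginx:1.12.0 generates, in case you prefer keeping the Deployment in a YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.12.0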

kubernetes can not join workers centos 7

My issue is that I cannot connect our machines (master and workers).
My join command is:
kubeadm join xxx.xxx.xxx.xxx:6443 --token a72x22.ofmqdjyzi7ot4l70 --discovery-token-ca-cert-hash sha256:3cfd9ddb1e655ef2172c12d914e2bb001434cc4c8a756919a7a6a9f0603e3131
I have executed:
echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 >/proc/sys/net/ipv4/ip_forward
swapoff -a
and I got the error:
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
error execution phase kubelet-start: configmaps "kubelet-config-1.15" is forbidden: User "system:bootstrap:a61x22" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
master kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
worker kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Maybe my issue is connected to the host or port?
How can I solve this issue?
Check whether the configmap kubelet-config-1.15 exists with the command below:
kubectl -n kube-system get configmap kubelet-config-1.15
Maybe your master is at version 1.14 and your new node downloaded kubelet version 1.15.
In that case the configmap doesn't exist; you have a configmap kubelet-config-1.14 instead.
Upgrade your master node to v1.15, or install Kubernetes v1.14 on your worker node.
You can see which version your nodes are running with kubectl get nodes:
[root@master /]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   32d   v1.14.0
node6    Ready    <none>   32d   v1.14.2
nodo2    Ready    <none>   32d   v1.14.2
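If you take the downgrade route on the CentOS 7 worker, the steps might look like the sketch below, assuming the official Kubernetes yum repository is configured; pin the exact 1.14.x patch release that matches your master:
yum remove -y kubeadm kubelet
yum install -y kubelet-1.14.3-0 kubeadm-1.14.3-0 --disableexcludes=kubernetes
systemctl enable --now kubelet
# re-run the kubeadm join command from the master afterwards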

GitLab Kubernetes integration error; configuration of Helm Tiller already exists

After connecting my GitLab repo to my self-hosted Kubernetes cluster via Operations > Kubernetes, I want to install Helm Tiller via the GUI, but I get:
Something went wrong while installing Helm Tiller
Kubernetes error: configmaps "values-content-configuration-helm" already exists
There are no pods running on the cluster and kubectl version returns:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Update
The output of kubectl get cm --all-namespaces:
NAMESPACE             NAME                                DATA   AGE
gitlab-managed-apps   values-content-configuration-helm   3      7d
...
Deleting this namespace solves the issue!
Find the gitlab-managed-apps namespace with kubectl get cm --all-namespaces:
NAMESPACE             NAME                                DATA   AGE
gitlab-managed-apps   values-content-configuration-helm   3      7d
...
Deleting this namespace solves the issue:
kubectl delete namespace gitlab-managed-apps
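You can verify the leftover configuration is gone before retrying the install from the GitLab UI (GitLab should recreate the namespace on the next attempt):
kubectl get configmaps -n gitlab-managed-apps
# expected: Error from server (NotFound): namespaces "gitlab-managed-apps" not found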
Thanks to Lev Kuznetsov.