Unable to install gitlab on Kubernetes

I am trying to install GitLab with Helm on a Kubernetes cluster that already has an ingress (cluster created by RKE). I want to deploy GitLab into a separate namespace. For that, I ran the command below:
$ gitlab-config helm upgrade --install gitlab gitlab/gitlab \
--timeout 600 \
--set global.hosts.domain=asdsa.asdasd.net \
--set certmanager-issuer.email=sd#cloudssky.com \
--set global.edition=ce \
--namespace gitlab-ci \
--set gitlab.migrations.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-rails-ce \
--set gitlab.sidekiq.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce \
--set gitlab.unicorn.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-unicorn-ce \
--set gitlab.unicorn.workhorse.image=registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ce \
--set gitlab.task-runner.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-task-runner-ce
But the install fails while cert-manager validates the domain with the http-01 challenge. Before running the above command, I had already pointed my base domain to the existing load balancer in my cluster.
Is there something different that needs to be done for successful http-01 validation?
Error:
Conditions:
Last Transition Time: 2018-11-18T15:22:00Z
Message: http-01 self check failed for domain "asdsa.asdasd.net"
Reason: ValidateError
Status: False
Type: Ready
More information:
The health checks for the load balancer also keep failing, so the installation fails even when using self-signed certificates.
When I SSHed into one of the nodes and checked the health endpoint, here's what I saw:
$ curl -v localhost:32030/healthz
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 32030 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:32030
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< Content-Type: application/json
< Date: Mon, 19 Nov 2018 13:38:49 GMT
< Content-Length: 114
<
{
"service": {
"namespace": "gitlab-ci",
"name": "gitlab-nginx-ingress-controller"
},
"localEndpoints": 0
* Connection #0 to host localhost left intact
}
And, when I checked ingress controller service, it was up and running:
gitlab-nginx-ingress-controller LoadBalancer 10.43.168.81 XXXXXXXXXXXXXX.us-east-2.elb.amazonaws.com 80:32006/TCP,443:31402/TCP,22:31858/TCP
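The 503 with localEndpoints: 0 looks like kube-proxy's health-check node port for a service running with externalTrafficPolicy: Local; that port only returns 200 on nodes that actually host an ingress-controller pod, which would explain the failing load-balancer health checks on the other nodes. A hedged way to confirm (the jsonpath fields are standard, the grep pattern is illustrative):
$ kubectl -n gitlab-ci get svc gitlab-nginx-ingress-controller \
    -o jsonpath='{.spec.externalTrafficPolicy} {.spec.healthCheckNodePort}{"\n"}'
$ kubectl -n gitlab-ci get pods -o wide | grep nginx-ingress-controller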

The issue was resolved here: https://gitlab.com/charts/gitlab/issues/939
Basically, the solution mentioned in that thread is not formally documented yet because it still needs confirmation.
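For anyone hitting the same self-check failure, a hedged set of checks to confirm the http-01 path before re-running the install (the domain is the one from the question; the token path below is purely illustrative, and the cert-manager resource names depend on the chart/cert-manager version):
# The ACME http-01 solver must be reachable from the internet on plain HTTP (port 80)
$ dig +short asdsa.asdasd.net        # should resolve to the ELB shown above
$ curl -v http://asdsa.asdasd.net/.well-known/acme-challenge/some-token
# cert-manager's own view of the failing certificate
$ kubectl -n gitlab-ci describe certificates
$ kubectl -n gitlab-ci get ingresses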

Related

Can't connect to proxy from within the pod [ Connection reset by peer ]

I get "Connection reset by peer" every time I try to use proxy from the Kubernetes pod.
Here is the log when from the curl:
>>>> curl -x http://5.188.62.223:15624 -L http://google.com -vvv
* Trying 5.188.62.223:15624...
* Connected to 5.188.62.223 (5.188.62.223) port 15624 (#0)
> GET http://google.com/ HTTP/1.1
> Host: google.com
> User-Agent: curl/7.79.1
> Accept: */*
> Proxy-Connection: Keep-Alive
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Interestingly, I have no issues when I use the same proxy on my local computer, in Docker, or on a remote host. Apparently something within the cluster doesn't let me communicate with it.
Currently I use Azure-hosted Kubernetes, but the same error happens on DigitalOcean as well.
I would be grateful for any clue on how I can get past this restriction, because I'm out of ideas.
Server Info:
{
Major:"1",
Minor:"20",
GitVersion:"v1.20.7",
GitCommit:"ca90e422dfe1e209df2a7b07f3d75b92910432b5",
GitTreeState:"clean",
BuildDate:"2021-10-09T04:59:48Z",
GoVersion:"go1.15.12", Compiler:"gc",
Platform:"linux/amd64"
}
The YAML file I use to start the pod is super basic. But originally I use Airflow with the Kubernetes executor, which spawns pretty similar basic pods:
apiVersion: v1
kind: Pod
metadata:
  name: scrapeevent.test
spec:
  affinity: {}
  containers:
  - command:
    - /bin/sh
    - -ec
    - while :; do echo '.'; sleep 5 ; done
    image: jaklimoff/mooncrops-opensea:latest
    imagePullPolicy: Always
    name: base
  restartPolicy: Never
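A hedged way to narrow this down from the pod above (the proxy address is the one from the question, and the commands assume nc and curl exist in the image):
$ kubectl exec -it scrapeevent.test -- sh
# inside the pod: does a bare TCP connection even open, or is it reset immediately?
/ # nc -vz 5.188.62.223 15624
/ # curl -x http://5.188.62.223:15624 -L http://google.com -vvv
# back outside the pod: is anything filtering egress traffic?
$ kubectl get networkpolicy --all-namespaces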

Istio mtls misconfiguration causes inconsistent behavior

I have deployed 2 istio enabled services on a GKE cluster.
istio version is 1.1.5 and GKE is on v1.15.9-gke.24
istio has been installed with global.mtls.enabled=true
serviceA communicates properly
serviceB apparently has TLS related issues.
I spin up a non-istio enabled deployment just for testing and exec into this test pod to curl these 2 service endpoints.
/ # curl -v serviceA
* Rebuilt URL to: serviceA/
* Trying 10.8.61.75...
* TCP_NODELAY set
* Connected to serviceA (10.8.61.75) port 80 (#0)
> GET / HTTP/1.1
> Host: serviceA
> User-Agent: curl/7.57.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< content-length: 130
< server: istio-envoy
< date: Sat, 25 Apr 2020 09:45:32 GMT
< x-envoy-upstream-service-time: 2
< x-envoy-decorator-operation: serviceA.mynamespace.svc.cluster.local:80/*
<
{"application":"Flask-Docker Container"}
* Connection #0 to host serviceA left intact
/ # curl -v serviceB
* Rebuilt URL to: serviceB/
* Trying 10.8.58.228...
* TCP_NODELAY set
* Connected to serviceB (10.8.58.228) port 80 (#0)
> GET / HTTP/1.1
> Host: serviceB
> User-Agent: curl/7.57.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Exec-ing into the Envoy proxy of the problematic service and turning trace-level logging on, I see this error:
serviceB-758bc87dcf-jzjgj istio-proxy [2020-04-24 13:15:21.180][29][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C1484] handshake error: 1
serviceB-758bc87dcf-jzjgj istio-proxy [2020-04-24 13:15:21.180][29][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:201] [C1484] TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST
The Envoy sidecars of both containers display similar information when debugging their certificates.
I verify this by exec-ing into both istio-proxy containers, cd-ing to /etc/certs/..data and running
openssl x509 -in root-cert.pem -noout -text
The two root-cert.pem files are identical!
Since those two Istio proxies have exactly the same TLS configuration in terms of certs, why does serviceB throw this cryptic SSL error?
FWIW serviceB communicates with a non-istio enabled postgres service.
Could that be causing the issue?
Curling the serviceB container from within itself, however, returns a healthy response.
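The OPENSSL_internal:HTTP_REQUEST error usually means a plaintext HTTP request arrived on a port where Envoy expected a TLS handshake, which fits a sidecar-less test pod talking to a strict-mTLS service. A hedged way to compare how Istio 1.1 thinks each service should be reached (command syntax changed in later Istio releases; the test pod name is a placeholder):
$ istioctl authn tls-check <test-pod> serviceA.mynamespace.svc.cluster.local
$ istioctl authn tls-check <test-pod> serviceB.mynamespace.svc.cluster.local
# and list the authentication policies / destination rules that apply to each service
$ kubectl get policies.authentication.istio.io,destinationrules.networking.istio.io --all-namespaces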

"Unable to connect to the server: Forbidden" on kubectl and helm commands only when running with Ansible

I want to automate kubectl and helm commands using Ansible. The target machine is configured properly, so both work on the CLI in a manual shell (e.g. kubectl get nodes or helm list). But when I try to make any API call, like getting the server version:
- name: List charts
  shell: kubectl version -v=8
it breaks with a Forbidden error. The verbose logging doesn't give me much more detail:
fatal: [127.0.0.1]: FAILED! => changed=true
  cmd: kubectl version -v=10
  delta: '0:00:00.072452'
  end: '2020-02-27 15:22:36.227928'
  msg: non-zero return code
  rc: 255
  start: '2020-02-27 15:22:36.155476'
  stderr: |-
    I0227 15:22:36.224517 27321 loader.go:359] Config loaded from file /home/user/.kube/config
    I0227 15:22:36.225211 27321 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.3 (linux/amd64) kubernetes/a452946" 'https://k8smaster01:6443/version?timeout=32s'
    I0227 15:22:36.225975 27321 round_trippers.go:405] GET https://k8smaster01:6443/version?timeout=32s in 0 milliseconds
    I0227 15:22:36.225986 27321 round_trippers.go:411] Response Headers:
    I0227 15:22:36.226062 27321 helpers.go:219] Connection error: Get https://k8smaster01:6443/version?timeout=32s: Forbidden
    F0227 15:22:36.226080 27321 helpers.go:119] Unable to connect to the server: Forbidden
  stderr_lines: <omitted>
  stdout: 'Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}'
  stdout_lines: <omitted>
However, when I send a manual request to the same API URL like this:
- name: Test master connection
  shell: curl -k https://k8smaster01:6443/version?timeout=32s
It works:
stderr_lines: <omitted>
stdout: |-
  {
    "major": "1",
    "minor": "11",
    "gitVersion": "v1.11.3",
    "gitCommit": "a4529464e4629c21224b3d52edfe0ea91b072862",
    "gitTreeState": "clean",
    "buildDate": "2018-09-09T17:53:03Z",
    "goVersion": "go1.10.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
Why don't API calls with kubectl work when executed from Ansible?
I'm behind a proxy server, but k8smaster01 is set in no_proxy. Ansible picks that up; I printed $no_proxy in a task for testing purposes.
For curl I used -k since it's a self-signed cert from k8s. This shouldn't affect kubectl (which itself works when not run from Ansible). It also doesn't work when calling kubectl --insecure-skip-tls-verify=true get node from Ansible.
I tried to unset the env variables from the proxy (since the proxy is required just for internet access) by setting empty environment variables:
- name: Kubectl test
  become: false
  shell: kubectl get no -v=10
  environment:
    http_proxy:
    https_proxy:
    no_proxy:
    HTTP_PROXY:
    HTTPS_PROXY:
    NO_PROXY:
This was a bad idea, since curl (which seems to be used inside kubectl) parses this to None and fails. Strangely, kubectl failed with a DNS error:
skipped caching discovery info due to Get https://k8smaster01:6443/api?timeout=32s: proxyconnect tcp: dial tcp: lookup None on 127.0.0.53:53: server misbehaving
I found out that the main problem was that I had set NO_PROXY=$no_proxy in /etc/environment, where no_proxy contains the hostname k8smaster01. Since /etc/environment doesn't resolve bash variables, the uppercase NO_PROXY just contained the literal string $no_proxy. So it was enough to replace NO_PROXY=$no_proxy with the corresponding value (e.g. NO_PROXY=k8smaster01).
It wasn't an issue before because most applications seem to follow the Linux convention of using the lowercase environment variables for proxy configuration.
On a local cluster (server listening on https://0.0.0.0:<any port>), also put 0.0.0.0/8 in NO_PROXY, otherwise kubectl will try to use your configured proxy (verify with kubectl cluster-info).
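A hedged sketch of the corrected /etc/environment (the proxy host is a made-up placeholder; keep whatever proxy your site actually uses, the point is that the file takes literal values, not shell variables):
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=k8smaster01,localhost,127.0.0.1,0.0.0.0/8
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=k8smaster01,localhost,127.0.0.1,0.0.0.0/8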

Setup Kubernetes HA cluster with kubeadm and F5 as load-balancer

I'm trying to set up a Kubernetes HA cluster using kubeadm as the installer and an F5 as the load balancer (I cannot use HAProxy). I'm experiencing issues with the F5 configuration.
I'm using self-signed certificates and passed the apiserver.crt and apiserver.key to the load balancer.
For some reason, the kubeadm init script fails with the following error:
[apiclient] All control plane components are healthy after 33.083159 seconds
I0805 10:09:11.335063 1875 uploadconfig.go:109] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0805 10:09:11.340266 1875 request.go:947] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - $F5_LOAD_BALANCER_VIP\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: $F5_LOAD_BALANCER_VIP:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n local:\n dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.1\nnetworking:\n dnsDomain: cluster.local\n podSubnet: 192.168.0.0/16\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n lnxkbmaster02:\n advertiseAddress: $MASTER01_IP\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterStatus\n"}}
I0805 10:09:11.340459 1875 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.15.1 (linux/amd64) kubernetes/4485c6f" 'https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps'
I0805 10:09:11.342399 1875 round_trippers.go:438] POST https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps 403 Forbidden in 1 milliseconds
I0805 10:09:11.342449 1875 round_trippers.go:444] Response Headers:
I0805 10:09:11.342479 1875 round_trippers.go:447] Content-Type: application/json
I0805 10:09:11.342507 1875 round_trippers.go:447] X-Content-Type-Options: nosniff
I0805 10:09:11.342535 1875 round_trippers.go:447] Date: Mon, 05 Aug 2019 08:09:11 GMT
I0805 10:09:11.342562 1875 round_trippers.go:447] Content-Length: 285
I0805 10:09:11.342672 1875 request.go:947] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps is forbidden: User \"system:anonymous\" cannot create resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"kind":"configmaps"},"code":403}
error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: unable to create ConfigMap: configmaps is forbidden: User "system:anonymous" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
The init is really basic:
kubeadm init --config=kubeadm-config.yaml --upload-certs
Here's the kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$F5_LOAD_BALANCER_VIP:6443"
networking:
podSubnet: "192.168.0.0/16"
If I set up the cluster using HAProxy instead, the init runs smoothly:
#---------------------------------------------------------------------
# kubernetes
#---------------------------------------------------------------------
frontend kubernetes
    bind $HAPROXY_LOAD_BALANCER_IP:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master01.my-domain $MASTER_01_IP:6443 check fall 3 rise 2
    server master02.my-domain $MASTER_02_IP:6443 check fall 3 rise 2
    server master03.my-domain $MASTER_03_IP:6443 check fall 3 rise 2
My solution was to deploy the cluster without the F5 in front, using a configuration as follows:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$MASTER_1_IP:6443"
networking:
podSubnet: "192.168.0.0/16"
Afterwards, it was necessary to deploy the F5 BIG-IP Controller for Kubernetes on the cluster to manage the F5 device from Kubernetes.
A detailed guide can be found here:
https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.10/
Beware that it requires an additional F5 license and admin privileges.
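If you do want to keep the F5 in front of the apiserver, a hedged way to check whether it is doing plain TCP passthrough or terminating TLS itself (placeholders as in the question):
$ openssl s_client -connect $F5_LOAD_BALANCER_VIP:6443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
$ openssl s_client -connect $MASTER01_IP:6443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
# If the two outputs differ, the F5 is most likely terminating TLS, so kubeadm's client
# certificate never reaches the apiserver and requests arrive as system:anonymous (403).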

error: the server doesn't have a resource type "svc"

I'm getting error: the server doesn't have a resource type "svc" when testing my kubectl configuration while following this guide:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
Detailed Error
$ kubectl get svc -v=8
I0712 15:30:24.902035 93745 loader.go:357] Config loaded from file /Users/matt.canty/.kube/config-test
I0712 15:30:24.902741 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:24.902762 93745 round_trippers.go:390] Request Headers:
I0712 15:30:24.902768 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:24.902773 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.425614 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 522 milliseconds
I0712 15:30:25.425651 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.425657 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.425662 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.425670 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.426757 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.428104 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.428239 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:25.428258 93745 round_trippers.go:390] Request Headers:
I0712 15:30:25.428268 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.428278 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:25.577788 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 149 milliseconds
I0712 15:30:25.577818 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.577838 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.577854 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.577868 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.578876 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.579492 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.579851 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:25.579864 93745 round_trippers.go:390] Request Headers:
I0712 15:30:25.579873 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.579879 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:25.729513 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 149 milliseconds
I0712 15:30:25.729541 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.729547 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.729552 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.729557 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.730606 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.731228 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.731254 93745 factory_object_mapping.go:93] Unable to retrieve API resources, falling back to hardcoded types: Unauthorized
F0712 15:30:25.731493 93745 helpers.go:119] error: the server doesn't have a resource type "svc"
Screenshot of EKS Cluster in AWS
Version
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:03:09Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Config
Kubectl Config
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: personal
AWS Config
cat .aws/config
[profile personal]
source_profile = personal
AWS Credentials
$ cat .aws/credentials
[personal]
aws_access_key_id = REDACTED
aws_secret_access_key = REDACTED
 ~/.kube/config-test
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: personal
Similar issues
error-the-server-doesnt-have-resource-type-svc
the-connection-to-the-server-localhost8080-was-refused-did-you-specify-the-ri
I just had a similar issue which I managed to resolve with AWS support. The problem was that the cluster was created with a role that the user assumes, but kubectl was not assuming this role with the default kube config created by the aws-cli.
I fixed the issue by providing the role in the users section of the kube config:
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - test
      - -r
      - <arn::of::your::role>
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: personal
I believe the heptio-authenticator-aws has since been renamed to aws-iam-authenticator, but this change was what allowed me to use the cluster.
The 401s look like a permissions issue. Did your user create the cluster?
In the docs: "When you create an Amazon EKS cluster, the IAM entity (user or role) is automatically granted system:master permissions in the cluster's RBAC configuration. To grant additional AWS users the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes."
If it was created by a different user, you'll need to use that user (configured in the CLI) to execute kubectl.
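A hedged sketch of what that aws-auth edit looks like (account ID, user name, and group are placeholders):
$ kubectl -n kube-system edit configmap aws-auth
# then, under data, add an entry along these lines:
#   mapUsers: |
#     - userarn: arn:aws:iam::111122223333:user/your-user
#       username: your-user
#       groups:
#         - system:masters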
Just delete the cache and http-cache directories in the .kube folder and try running the command
kubectl get svc
Also make sure that your config file is properly indented; syntax errors can sometimes cause this error.
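A hedged one-liner for the cache cleanup described above (paths are kubectl's defaults):
$ rm -rf ~/.kube/cache ~/.kube/http-cache
$ kubectl get svc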
You need to make sure the credentials used to create the cluster and the credentials kubectl uses on the CLI are the same. In my case, I created the cluster via the console, which used temporary, expiring credentials from the AWS credential vending machine, whereas kubectl used my actual permanent credentials.
To fix the error, I created the cluster from the AWS CLI as well.
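For reference, a hedged sketch of creating the cluster from the CLI with the same permanent credentials (cluster name, ARN, subnets, and security group are placeholders):
$ aws eks create-cluster --name my-cluster \
    --role-arn arn:aws:iam::111122223333:role/eks-service-role \
    --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc
$ aws eks describe-cluster --name my-cluster --query cluster.status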
I had this issue because my KUBECONFIG environment variable had more than one value; it looked something like:
:/Users/my-user/.kube/config-firstcluster:/Users/my-user/.kube/config-secondcluster
Try unsetting and resetting the environment variable to hold only one value and see if that works for you.
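A hedged way to check and reset it (the path is the first file from the example above):
$ echo $KUBECONFIG
$ unset KUBECONFIG
$ export KUBECONFIG=$HOME/.kube/config-firstcluster
$ kubectl get svc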
Possible solution if you created the cluster in the UI
If you created the cluster in the UI, it's possible the AWS root user created the cluster. According to the docs, "When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:master) permissions. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. "
You'll need to first login to the AWS CLI as the root user in order to update the permissions of the IAM user you want to have access to the cluster.
You'll need to get an access key for the root user and put this info in .aws/credentials under the default user. You can do this using the command aws configure
Now kubectl get svc works, since you're logged in as the root user that initially created the cluster.
Apply the aws-auth ConfigMap to the cluster. Follow step 2 from these docs, using the NodeInstanceRole value you got as the Output from Step 3: Launch and Configure Amazon EKS Worker Nodes
To add a non-root IAM user or role to an Amazon EKS cluster, follow step 3 from these docs.
Edit the configmap/aws-auth and add other users that need kubectl access in the mapUsers section.
Run aws configure again and add the access key info from your non-root user.
Now you can access your cluster from the AWS CLI and using kubectl.
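A hedged outline of that flow as shell commands (profile handling simplified; the ConfigMap name is the real aws-auth, everything else follows the steps above):
$ aws configure                                   # enter the root user's access key first
$ kubectl get svc                                 # works, since root created the cluster
$ kubectl -n kube-system edit configmap/aws-auth  # add your IAM users under mapUsers
$ aws configure                                   # switch back to the non-root user's keys
$ kubectl get svc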
I ran into this error, and it was a DIFFERENT kube config issue, so the
error: the server doesn't have a resource type “svc”
error is probably very generic.
In my case, the solution was to remove the quotes around the certificate-authority-data.
Example
(not working)
certificate-authority-data:"xyxyxyxyxyxy"
(working)
certificate-authority-data: xyxyxyxyxyxy
I had a similar issue where I was not able to list any Kubernetes objects using kubectl. I tried the following commands but got the same "error: the server doesn't have a resource type object_name":
kubectl get pod
kubectl get service
kubectl get configmap
kubectl get namespace
I checked my k8s dashboard and it was working fine, so I understood that the problem is in the connection kubectl makes to the kube-apiserver. I decided to curl the apiserver with the existing certificates, which requires the certificate key and crt files. By default, kubectl reads the config from $HOME/.kube/config and looks for a context. In case of multiple clusters, check the value of current-context: your_user#cluster_name. In the users section, check your_user and save the values of client-certificate/client-certificate-data and client-key/client-key-data to files using the following steps.
echo "value of client-certificate-data" | base64 --decode > your_user.crt
echo "value of client-key-data" | base64 --decode > your_user.key
# check the validity of the certificate
openssl x509 -in your_user.crt -text
If the certificate has expired, then create a new certificate and try to authenticate:
openssl genrsa -out your_user.key 2048
openssl req -new -key your_user.key -subj "/CN=check_cn_from_existing_certificate_crt_file" -out your_user.csr
openssl x509 -req -in your_user.csr -CA /$PATH/ca.crt -CAkey /$PATH/ca.key -out your_user.crt -days 30
# Get the apiserver ip
APISERVER=$(cat ~/.kube/config | grep server | cut -f 2- -d ":" | tr -d " ")
# Authenticate with apiserver using curl command
curl $APISERVER/api/v1/pods \
--cert your_user.crt \
--key your_user.key \
--cacert /$PATH/ca.crt
If you are able to see pods, then update the certificate paths in the config file.
Final output of $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /$PATH/ca.crt
    server: https://192.168.0.143:8443   # ($APISERVER)
  name: cluster_name
contexts:
- context:
    cluster: cluster_name
    user: your_user
  name: your_user#cluster_name
current-context: your_user#cluster_name
kind: Config
preferences: {}
users:
- name: your_user
  user:
    client-certificate: /$PATH/your_user.crt
    client-key: /$PATH/your_user.key
Now you should be able to successfully list pods and other resources using kubectl.