Unauthorized to run kubectl apply via GitLab Runner - kubernetes

I have set up a deployment stage in .gitlab-ci.yml as follows:
deploy_internal_dev:
  stage: deploy_internal
  only:
    - master
  image: image
  environment:
    name: Dev
    url: url
  script:
    - pwd
    - whoami
    - kubectl config set-context --current --namespace=xxx
    - kubectl delete -f kube/kube-dev/deployment.yml --now --timeout=100s || { echo "gracefull delete failed" ; kubectl delete -f kube/kube-dev/deployment.yml --grace-period=0 --force ; } || true
    - kubectl apply -f kube/kube-dev
  tags:
    - development
  dependencies: [ ]
This worked fine previously, but since yesterday the runner is no longer authorized to run these commands. It says:
Executing "step_script" stage of the job script
Using docker image sha256:88fd9345c2d8e3a95a9b1f792c3f330e7e529b7c217ee1d607ef9cb2a62288ca for docker.xxxx.net/xxx/kubectl-dev:1.0.0 with digest docker.xxx.net/xxxx/kubectl-dev@sha256:73548cd419ff37db648cb88285c4fc6dc1b3c9ab1addc7a050b2866e5f51bb78 ...
$ pwd
/builds/xxx-ckdu/xxx-api
$ whoami
root
$ kubectl config set-context --current --namespace=xxx
Context "kubernetes-admin#kubernetes" modified.
$ kubectl delete -f kube/kube-dev/deployment.yml --now --timeout=100s || { echo "gracefull delete failed" ; kubectl delete -f kube/kube-dev/deployment.yml --grace-period=0 --force ; } || true
error: unable to recognize "kube/kube-dev/deployment.yml": Unauthorized
gracefull delete failed
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: unable to recognize "kube/kube-dev/deployment.yml": Unauthorized
$ kubectl apply -f kube/kube-dev
error: You must be logged in to the server (the server has asked for the client to provide credentials)
ERROR: Job failed: exit code 1
However, I can run these commands and apply the k8s config via server access (SSH), so I don't know what I'm missing here.
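For reference, a minimal way to compare the credentials the runner's kubectl is using with the ones available over SSH (the xxx namespace is the same placeholder as in the job) could be:
# show the context, cluster and user the mounted kubeconfig resolves to
kubectl config view --minify
# ask the API server whether the current credentials may perform the failing actions
kubectl auth can-i delete deployments -n xxx
kubectl auth can-i create deployments -n xxx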

Related

Edit a Kubernetes resource using kubectl run --command

I am trying to create a pod that runs a command to edit an existing resource, but it's not working.
My CR is:
apiVersion: feature-toggle.resource.api.sap/v1
kind: TestCR
metadata:
  name: test
  namespace: my-namespace
spec:
  enabled: true
  strategies:
    - name: tesst
      parameters:
        perecetage: "10"
The command I am trying to run is
kubectl run kube-bitname --image=bitnami/kubectl:latest -n my-namespace --command -- kubectl get testcr test -n my-namespace -o json | jq '.spec.strategies[0].parameters.perecetage="66"' | kubectl apply -f -
But this does not work. Any idea?
It would be better if you posted more info about the error or the trace you are getting when executing the command, but I have a question that could give good insight into what is happening here.
Does the kubectl command that you are running inside bitnami/kubectl:latest have any context that allows it to connect to your cluster?
If you take a look at the kubectl Docker Hub documentation, you can see that you should map a config file into the pod in order to connect to your own cluster.
$ docker run --rm --name kubectl -v /path/to/your/kube/config:/.kube/config bitnami/kubectl:latest
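Translating that to the kubectl run scenario from the question, a minimal sketch could be to store a working kubeconfig in a Secret and mount it where the image expects its config. The Secret name kubectl-config is only a placeholder here, created e.g. with kubectl create secret generic kubectl-config --from-file=config=$HOME/.kube/config -n my-namespace:
apiVersion: v1
kind: Pod
metadata:
  name: kube-bitname
  namespace: my-namespace
spec:
  restartPolicy: Never
  containers:
    - name: kubectl
      image: bitnami/kubectl:latest
      # The image's entrypoint is kubectl; reuse the question's read command to verify access
      args: ["get", "testcr", "test", "-n", "my-namespace", "-o", "json"]
      volumeMounts:
        - name: kubeconfig
          mountPath: /.kube
          readOnly: true
  volumes:
    - name: kubeconfig
      secret:
        secretName: kubectl-config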

How to configure kubectl to act as a service account?

I wish to run a Drone CI/CD pipeline on a Raspberry Pi, including a stage to update a Kubernetes Deployment. Unfortunately, all the pre-built solutions that I've found for doing so (e.g. 1, e.g. ) are not built for arm64 architecture, so I believe I need to build my own.
I am attempting to adapt the commands from here (see also README.md, which describes the authorization required), but my attempt to contact the cluster still fails with authorization problems:
$ cat service-account-definition.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: drone-demo-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: drone-demo-service-account-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: drone-demo-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f service-account-definition.yaml
serviceaccount/drone-demo-service-account created
clusterrolebinding.rbac.authorization.k8s.io/drone-demo-service-account-clusterrolebinding created
$ kubectl get serviceaccount drone-demo-service-account
NAME                         SECRETS   AGE
drone-demo-service-account   1         10s
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.ca\.crt}' > secrets/cert
$ head -c 10 secrets/cert
LS0tLS1CRU%
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 > secrets/token
$ head -c 10 secrets/token
WlhsS2FHSk%
$ cat Dockerfile
FROM busybox
COPY . .
CMD ["./script.sh"]
$ cat script.sh
#!/bin/sh
server=$(cat secrets/server) # Pre-filled
cert=$(cat secrets/cert)
# Added this `tr` call, which is not present in the source I'm working from, after noticing that
# the file-content contains newlines
token=$(cat secrets/token | tr -d '\n')
echo "DEBUG: server is $server, cert is $(echo $cert | head -c 10)..., token is $(echo $token | head -c 10)..."
# Cannot depend on the bitnami/kubectl image (https://hub.docker.com/r/bitnami/kubectl), because
# it's not available for arm64 - https://github.com/bitnami/charts/issues/7305
wget https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/linux/arm64/kubectl
chmod +x kubectl
./kubectl config set-credentials default --token=$token
echo $cert | base64 -d > ca.crt
./kubectl config set-cluster default --server=$server --certificate-authority=ca.crt
./kubectl config set-context default --cluster=default --user=default
./kubectl config use-context default
echo "Done with setup, now cat-ing .kube/config"
echo
cat $HOME/.kube/config
echo "Attempting to get pods"
echo
./kubectl get pods
$ docker build -t stack-overflow-testing . && docker run stack-overflow-testing
Sending build context to Docker daemon 10.75kB
Step 1/3 : FROM busybox
---> 3c277069c6ae
Step 2/3 : COPY . .
---> 74c6a132d255
Step 3/3 : CMD ["./script.sh"]
---> Running in dc55f33f74bb
Removing intermediate container dc55f33f74bb
---> dc68a5d6ba9b
Successfully built dc68a5d6ba9b
Successfully tagged stack-overflow-testing:latest
DEBUG: server is https://rassigma.avril:6443, cert is LS0tLS1CRU..., token is WlhsS2FHSk...
Connecting to storage.googleapis.com (142.250.188.16:443)
wget: note: TLS certificate validation not implemented
saving to 'kubectl'
kubectl 18% |***** | 7118k 0:00:04 ETA
kubectl 43% |************* | 16.5M 0:00:02 ETA
kubectl 68% |********************** | 26.2M 0:00:01 ETA
kubectl 94% |****************************** | 35.8M 0:00:00 ETA
kubectl 100% |********************************| 38.0M 0:00:00 ETA
'kubectl' saved
User "default" set.
Cluster "default" set.
Context "default" created.
Switched to context "default".
Done with setup, now cat-ing .kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /ca.crt
server: https://rassigma.avril:6443
name: default
contexts:
- context:
cluster: default
user: default
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
user:
token: WlhsS2FHSkhZM[...REDACTED]
Attempting to get pods
error: You must be logged in to the server (Unauthorized)
If I copy the ~/.kube/config from my laptop to the docker container, kubectl commands succeed as expected - so, this isn't a networking issue, just an authorization one. I do note that my laptop-based ~/.kube/config lists client-certificate-data and client-key-data rather than token under users: user:, but I suspect that's because my base config is recording a non-service-account.
How can I set up kubectl to authorize as a service account?
Some reading I have done that didn't answer the question for me:
Kubernetes documentation on AuthN/AuthZ
Google Kubernetes Engine article on service accounts
Configure Service Accounts for Pods (this described how to create and associate the accounts, but not how to act as them)
Two blog posts (1, 2) that refer to Service Accounts
It appears you have used | base64 instead of | base64 --decode. The token stored in the secret's .data.token field is already base64-encoded, so piping it through base64 again encodes it a second time and the API server rejects the resulting credential.
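For example, the token-extraction step from the question would then become:
kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 --decode > secrets/token
With the token decoded once, the tr -d '\n' workaround in script.sh should presumably no longer be needed, since the decoded service-account token is a single line.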

How do I redirect Ansible to use files in a role directory?

Salutations, I am deploying pods/applications to EKS via Ansible. My playbook runs a few kubectl apply -f commands in order to deploy EKS resources, and all of the .yaml files are in the main Ansible directory.
I would like to place the .yaml files that create each application in its own Ansible role/files directory in order to clean up the main Ansible directory a bit (the .yaml files are becoming overwhelming, and I only have two applications being deployed thus far).
The issue is this: when I move the .yaml files to their respective roles/files directory, Ansible still seems to look for the files in the main Ansible directory instead of scanning the role directory.
How do I redirect Ansible to run the shell commands on the .yaml files in the role's files directory? Playbook below:
#
# Deploying Jenkins to AWS EKS
#
# Create Jenkins Namespace
- name: Create Jenkins Namespace & set it to default
  shell: |
    kubectl create namespace jenkins
    kubectl config set-context --current --namespace=jenkins
# Create Jenkins Service Account
- name: Create Jenkins Service Account
  shell: |
    kubectl create serviceaccount jenkins-master -n jenkins
    kubectl get secret $(kubectl get sa jenkins-master -n jenkins -o jsonpath={.secrets[0].name}) -n jenkins -o jsonpath={.data.'ca\.crt'} | base64 --decode
# Deploy Jenkins
- name: Deploy Jenkins Application
  shell: |
    kubectl apply -f jenkins-service.yaml
    kubectl apply -f jenkins-vol.yaml
    kubectl apply -f jenkins-role.yaml
    kubectl apply -f jenkins-configmap.yaml
    kubectl apply -f jenkins-deployment.yaml
Below is the role directory structure; Ansible doesn't check this location for the .yaml files used in the playbook above.
You could use the role_path variable, which contains the path to the currently executing role. You could write your tasks like:
- name: Deploy Jenkins Application
  shell: |
    kubectl apply -f {{ role_path }}/files/jenkins-service.yaml
    kubectl apply -f {{ role_path }}/files/jenkins-vol.yaml
    ...
Alternatively, a fileglob lookup might be easier:
- name: Deploy Jenkins Application
  command: kubectl apply -f {{ item }}
  loop: "{{ query('fileglob', '*.yaml') }}"
This would loop over all the *.yaml files in your role's files directory.
You could consider replacing your use of kubectl with the k8s module.
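For instance, a rough sketch of such a task, assuming the kubernetes.core collection (which provides the k8s module) is installed:
- name: Deploy Jenkins Application
  kubernetes.core.k8s:
    state: present
    src: "{{ item }}"
  loop: "{{ query('fileglob', '*.yaml') }}"
This would apply each manifest from the role's files directory through the Kubernetes API instead of shelling out to kubectl.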
Lastly, rather than managing these resources using Ansible, you could consider using kustomize, which I have found to be easier to work with unless you're relying heavily on Ansible templating.
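For illustration, a minimal sketch of a kustomization.yaml listing the manifests above, applied with kubectl apply -k .:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: jenkins
resources:
  - jenkins-service.yaml
  - jenkins-vol.yaml
  - jenkins-role.yaml
  - jenkins-configmap.yaml
  - jenkins-deployment.yaml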

kubectl Please enter Username: error: EOF

I started getting an error in my CI process, Please enter Username: error: EOF, when running kubectl commands.
The kubectl version matches the cluster version, and I can run the same commands fine from my machine with the same configuration shown by kubectl config view.
Here are the logs:
+ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: REDACTED_FOR_QUESTION
  name: REDACTED_FOR_QUESTION
contexts:
- context:
    cluster: REDACTED_FOR_QUESTION
    user: REDACTED_FOR_QUESTION
  name: REDACTED_FOR_QUESTION
current-context: REDACTED_FOR_QUESTION
kind: Config
preferences: {}
users:
- name: REDACTED_FOR_QUESTION
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
++ echo 'Starting Kube proxy - port is 8001'
++ set +e
++ echo 'using kubectl'
++ sleep 1
++ kubectl proxy --port=8001
error: EOF
++ cat backingfile
++ rm backingfile
++ set -e
+ echo Starting Kube proxy - port is 8001 using kubectl Please enter Username:
Starting Kube proxy - port is 8001 using kubectl Please enter Username:
+ kubectl version
Please enter Username: error: EOF
Exited with code 1
What I am doing in my script is the following:
echo "using kubectl"
kubectl proxy --port=${KUBECTL_PROXY_PORT} > backingfile &
sleep 1 && cat backingfile && rm backingfile
set -e
This allows me to launch kubectl proxy in the background while still getting the logs of the command.
What is causing this Please enter Username: error: EOF error, and how can I run the commands successfully again?
It looks like you are pointing to a different KUBECONFIG file:
1. You should verify your KUBECONFIG variable.
2. You can add the --kubeconfig=absolute_path_to_the_KUBECONFIG_file flag to the kubectl commands in your script.
3. You can combine the above with kubectl config --kubeconfig=XXXX use-context xxxx, for example as sketched below.
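A minimal sketch of what that could look like in the CI script (the kubeconfig path and context name are placeholders):
# Point kubectl at an explicit kubeconfig instead of relying on the environment
KUBECONFIG_FILE=/path/to/kubeconfig
kubectl config --kubeconfig="${KUBECONFIG_FILE}" use-context my-context
kubectl --kubeconfig="${KUBECONFIG_FILE}" proxy --port="${KUBECTL_PROXY_PORT}" > backingfile &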
Please follow Define clusters, users, and contexts.
Hope this helps.

Kubernetes context is not set

I have this config file
apiVersion: v1
clusters:
- cluster:
    server: [REDACTED] // IP of my cluster
  name: staging
contexts:
- context:
    cluster: staging
    user: ""
  name: staging-api
current-context: staging-api
kind: Config
preferences: {}
users: []
I run this command
kubectl config --kubeconfig=kube-config use-context staging-api
I get this message
Switched to context "staging-api".
I then run
kubectl get pods
and I get this message
The connection to the server localhost:8080 was refused - did you specify the right host or port?
As far as I can tell from the docs
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
I'm doing it right. Am I missing something?
Yes, try the following steps to access the Kubernetes cluster. These steps assume that you have your k8s certificates in /etc/kubernetes.
You need to set up the cluster name, kubeconfig, user, and kube cert file in the following variables and then simply run these commands:
CLUSTER_NAME="kubernetes"
KCONFIG=admin.conf
KUSER="kubernetes-admin"
KCERT=admin
cd /etc/kubernetes/
$ kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=pki/ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${KCONFIG}
$ kubectl config set-credentials kubernetes-admin \
--client-certificate=admin.crt \
--client-key=admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.conf
$ kubectl config set-context ${KUSER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUSER} \
--kubeconfig=${KCONFIG}
$ kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
$ kubectl config view --kubeconfig=${KCONFIG}
After this you will be able to access the cluster. Hope this helps.
You need to fetch the credentials of the running cluster. Try this:
gcloud container clusters get-credentials <cluster_name> --zone <zone_name>
More info:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
I got the same problem as mentioned in the title.
When I executed:
kubectl config current-context
The output was:
error: current-context is not set
In my case it was an indentation problem.
One whitespace character before current-context cost me a few hours of debugging:
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:...:cluster/...
    user: arn:aws:eks:us-east-2:...:cluster/...
  name: arn:aws:eks:us-east-2:...:cluster/...
 current-context: arn:aws:eks:us-east-2:...:cluster/... <- The whitespace at the beginning of the row was the source of the error.
I had the same issue on a Mac M1...
The problem was that I am using kubectx and kubens, so those tools are the ones controlling the context and namespace.
In this situation the correct command is
kubectx staging-api
More information in the official repository.