GitLab deploy to Kubernetes namespace not allowed

Using GitLab + Kubernetes, how do I deploy something to a specific (e.g. test) namespace? I've followed the GitLab docs, but I can't find how to select a specific namespace when deploying.
This .gitlab-ci.yml file...
stages:
  - deploy

deploy:
  stage: deploy
  tags: [local]
  environment:
    name: test
    kubernetes:
      namespace: test
  script:
    - kubectl config get-contexts
    - kubectl apply -f nginx.yaml
    - kubectl get pods --namespace deploy-2-test
    - kubectl apply -f nginx.yaml --namespace test
...produces this result:
on rap N37D1QxB
Preparing the "shell" executor 00:00
Using Shell executor...
Preparing environment 00:00
... [everything fine until here]
Executing "step_script" stage of the job script 00:00
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* gitlab-deploy gitlab-deploy gitlab-deploy deploy-2-test
$ kubectl apply -f nginx.yaml
deployment.apps/nginx-deployment created
$ kubectl get pods --namespace deploy-2-test
NAME READY STATUS RESTARTS AGE
nginx-deployment-66b6c48dd5-4lx4s 0/1 ContainerCreating 0 0s
nginx-deployment-66b6c48dd5-dcpcr 0/1 ContainerCreating 0 0s
$ kubectl apply -f nginx.yaml --namespace test
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-deployment", Namespace: "test"
from server for: "nginx.yaml": deployments.apps "nginx-deployment" is forbidden: User "system:serviceaccount:deploy-2-test:deploy-2-test-service-account" cannot get resource "deployments" in API group "apps" in the namespace "test"
Cleaning up file based variables 00:00
ERROR: Job failed: exit status 1
Notice that the deployment is done in the deploy-2-test namespace, even though the .gitlab-ci.yml file points to the test namespace; and when --namespace is included in the deploy command, the service account has no permission to deploy there.
Following the GitLab docs, I've added the cluster-admin ClusterRole to the gitlab ServiceAccount, which should be almighty...
The nginx deployment is the classic one. How can I deploy to the test namespace? Why and how is the deploy-2-test namespace generated?

Found the solution: just disable the GitLab-managed cluster option on the GitLab cluster definition page.
Excerpt from the output:
...
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* gitlab-deploy gitlab-deploy gitlab-deploy test
$ kubectl apply -f nginx.yaml
deployment.apps/nginx-deployment created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-66b6c48dd5-55m6p 0/1 ContainerCreating 0 0s
nginx-deployment-66b6c48dd5-vbhtc 0/1 ContainerCreating 0 0s
Cleaning up file based variables
Job succeeded
Effectively, deploy.environment.kubernetes.namespace is the setting that defines the final Kubernetes namespace.

I'm not 100% sure, but setting environment:kubernetes:namespace might not change your current context; it only applies that value to the KUBE_NAMESPACE environment variable.
If you want to be sure, you can always use --namespace $KUBE_NAMESPACE in your scripts. That's what we do too, to prevent any context issues with our scripts.
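For illustration, a minimal sketch of that approach in the job from the question (KUBE_NAMESPACE is the variable GitLab sets from environment:kubernetes:namespace; the rest mirrors the original job):
deploy:
  stage: deploy
  environment:
    name: test
    kubernetes:
      namespace: test
  script:
    # Pass the namespace explicitly so the kubeconfig's current context doesn't matter
    - kubectl apply -f nginx.yaml --namespace "$KUBE_NAMESPACE"
    - kubectl get pods --namespace "$KUBE_NAMESPACE"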

Per the instructions, there's a - kubectl config use-context line missing from your script after - kubectl config get-contexts. With this in place, --namespace works for me.
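For example, a sketch only (the context name comes from whatever kubectl config get-contexts prints — gitlab-deploy in the log above, or something like path/to/agent-project:agent-name when using the GitLab agent):
script:
  - kubectl config get-contexts
  # Explicitly select the context before deploying
  - kubectl config use-context gitlab-deploy
  - kubectl apply -f nginx.yaml --namespace test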

Related

Pod is not found when trying to delete, however, can be patched

I have a pod that I can see on GKE, but if I try to delete it, I get the error:
kubectl delete pod my-pod --namespace=kube-system --context=cluster-1
Error from server (NotFound): pods "my-pod" not found
However, if I try to patch it, the operation completes successfully:
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
deployment.apps/my-pod patched
Same namespace, same context, same pod. Why does kubectl fail to delete the pod?
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
You are patching the deployment here, not the pod.
Additionally, your pod will not be called "my-pod"; it will be named after your deployment plus a hash (a random set of letters and numbers), something like "my-pod-ace3g".
To see the pods in the namespace use
kubectl get pods -n {namespace}
Since you've put the deployment in the "kube-system" namespace, you would use
kubectl get pods -n kube-system
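So, as a sketch (the full pod name below is hypothetical — copy the real one from the get pods output):
kubectl get pods -n kube-system --context=cluster-1
# Delete the pod by its full generated name, not by the deployment name
kubectl delete pod my-pod-ace3g -n kube-system --context=cluster-1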
Side note: Generally, don't use the kube-system namespace unless your deployment is related to cluster functionality. There's a namespace called default that you can use to test things.

Kubectl config within AKS

I have a cluster in Azure (AKS). In this cluster I have 2 node pools: a System one and a User one to run apps.
When executing the kubectl get pod command inside a pod, I get the error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The pod is running on the system node (it doesn't work on the user node either) in one of my own namespaces (let's call it cron).
But when running the same command in a pod belonging to the kube-system namespace on the system node, it works fine.
It looks like it's linked to the kubectl configuration (kubeconfig), but I don't understand how it works in the kube-system namespace and not in the cron one.
What did I miss in AKS that prevents me from running kubectl commands in a pod that doesn't belong to the kube-system namespace?
Edit1:
The environment variables are different, especially those linked to Kubernetes.
On the pod running under kube-system, I get the URL:
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBE_DNS_PORT=udp://10.0.0.10:53
KUBE_DNS_PORT_53_TCP_ADDR=10.0.0.10
KUBERNETES_DASHBOARD_PORT_443_TCP_PORT=443
KUBERNETES_DASHBOARD_SERVICE_PORT=443
KUBE_DNS_SERVICE_HOST=10.0.0.10
KUBERNETES_PORT_443_TCP=tcp://*****.hcp.japaneast.azmk8s.io:443
KUBE_DNS_PORT_53_TCP_PORT=53
KUBE_DNS_PORT_53_UDP=udp://10.0.0.10:53
KUBE_DNS_PORT_53_UDP_PROTO=udp
KUBE_DNS_SERVICE_PORT_DNS=53
KUBE_DNS_PORT_53_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_ADDR=10.0.0.10
KUBERNETES_DASHBOARD_PORT_443_TCP_ADDR=10.0.207.97
KUBE_DNS_SERVICE_PORT_DNS_TCP=53
KUBERNETES_DASHBOARD_PORT_443_TCP=tcp://10.0.207.97:443
KUBERNETES_DASHBOARD_PORT=tcp://10.0.207.97:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_PORT=53
KUBERNETES_PORT_443_TCP_ADDR=****.hcp.japaneast.azmk8s.io
KUBERNETES_SERVICE_HOST=*****.hcp.japaneast.azmk8s.io
KUBERNETES_PORT=tcp://*****.hcp.japaneast.azmk8s.io:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBE_DNS_PORT_53_TCP=tcp://10.0.0.10:53
KUBERNETES_DASHBOARD_PORT_443_TCP_PROTO=tcp
KUBE_DNS_SERVICE_PORT=53
KUBERNETES_DASHBOARD_SERVICE_HOST=10.0.207.97
In my own namespace cron:
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
About namespaces: I labeled my own namespace with the same labels as kube-system, but no luck.
About the config: kubectl config view comes back empty in both pods (run inside the pod):
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
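The localhost:8080 error is kubectl's last-resort default when it finds no usable configuration — neither a kubeconfig nor in-cluster service account credentials. One way to rule out missing credentials is to point kubectl at the API explicitly; a minimal sketch, assuming the default service account token is mounted at the standard in-cluster path and has RBAC permission to list pods:
APISERVER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
SA=/var/run/secrets/kubernetes.io/serviceaccount
# Use the mounted CA and token instead of a kubeconfig
kubectl get pods --server="$APISERVER" --certificate-authority="$SA/ca.crt" --token="$(cat $SA/token)"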

kubernetes: How to set active namespace for all kubectl commands?

I am working on a Kubernetes cluster. In my cluster I have 3 namespaces:
Default
Staging
Production
When I want to work on the staging namespace, I have to pass the namespace in every kubectl command:
kubectl get pods -n staging
kubectl get deployment -n staging
Is there any way to set the active namespace once?
kubectl config set-context --current --namespace=<insert-namespace-name-here>
# Validate it
kubectl config view --minify | grep namespace:
Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference
kubectl config set-context --current --namespace=<insert-namespace-name-here>
Refer here
You can also use the kubectx plugin (its kubens command switches namespaces).
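For example, assuming the kubectx/kubens tools are installed:
# Switch the active namespace for the current context
kubens staging
# List namespaces known to the cluster
kubens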

How to debug why my pods are pending in GCE

I'm trying to get a pod running on GCE. The pod has an init container, and is created by applying a manifest with a deployment that creates 1 replica of the pod.
When I look at my workloads on the cloud console, I can see that under 'Active revisions' my deployment is in the state of 'Pods are pending', and under 'Managed pods', the status is 'PodsInitializing'.
The container logs are empty, and the audit logs contain a single entry for the creation of the deployment.
My pods seem to be stuck in the above state, and I'm not really sure why. How do I go about debugging that?
Edit:
kubectl get pods --namespace=my-namespace
Outputs:
NAME READY STATUS RESTARTS AGE
my-pod-v77jm 0/1 Init:0/1 0 55m
But when I run:
kubectl describe pod my-pod-v77jm
I get
Error from server (NotFound): pods "my-pod-v77jm" not found
If you have access to the kube-apiserver via kubectl:
Use describe to see details about the pod and its containers:
kubectl describe pod myPod --namespace mynamespace
To view container logs (including init containers):
kubectl logs myPod --namespace mynamespace -c initContainerName
You can get more information about pod statuses and how to debug init containers here.
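As a side note on the edit above: kubectl describe looked in the default namespace, which is why the pod was not found — the --namespace (or -n) flag is needed there too. Two more commands that often help with pending/initializing pods, using the names from the question:
kubectl describe pod my-pod-v77jm --namespace my-namespace
# Cluster events usually show why a pod is stuck (image pulls, scheduling, failing init containers)
kubectl get events --namespace my-namespace --sort-by=.metadata.creationTimestamp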

Kubernetes in Azure cannot display CPU usage for HPA then cannot perform auto-scaling

I tried the solutions from this link but failed to get CPU usage (it still shows <unknown>).
Below are the steps that I performed:
Clone the metrics-server github repo: git clone https://github.com/kubernetes-incubator/metrics-server.git
Add the lines below under "imagePullPolicy" in metrics-server-deployment.yaml:
command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
Go into the deploy/1.8+ directory (cd deploy/1.8+) and apply the following YAML files:
kubectl apply -f aggregated-metrics-reader.yaml
kubectl apply -f auth-reader.yaml
kubectl apply -f auth-delegator.yaml
kubectl apply -f metrics-apiservice.yaml
kubectl apply -f resource-reader.yaml
kubectl apply -f metrics-server-deployment.yaml
kubectl apply -f metrics-server-service.yaml
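As a quick sanity check after applying these manifests (a sketch, assuming the standard labels and APIService name from the metrics-server repo), you can confirm the metrics API is registered and answering:
kubectl get pods -n kube-system -l k8s-app=metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io
# If metrics are flowing, this returns node CPU/memory instead of an error
kubectl top nodes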
4.a) Run a sample pod:
kubectl run --generator=run-pod/v1 php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
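One note on this step: with --generator=run-pod/v1, kubectl run creates a bare Pod (plus a Service via --expose), not a Deployment, so kubectl autoscale deployment php-apache has no Deployment to target. A rough sketch of the same workload as a Deployment, mirroring the image and the 200m CPU request from the command above (the Service can still be created with kubectl expose deployment php-apache --port=80):
# Equivalent workload as a Deployment that an HPA can scale
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-apache
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
        - name: php-apache
          image: k8s.gcr.io/hpa-example
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 200m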
I faced an error when creating the HPA:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'kubectl get resource/<resource_name>' instead of 'kubectl get resource resource/<resource_name>')
4.b) Run my pod and recreate HPA
--> it still displays <unknown> in CPU usage when running "kubectl get hpa"
How can I set this up to retrieve CPU usage properly?
I provide more information for the HPA below: