Deploying an app to GKE from CI - kubernetes

I use GitLab for my CI; they host it and I have my own runners.
I have a k8s cluster running in GKE.
I want to use kubectl apply to deploy new versions of my containers.
This all works from my local machine because it uses my Google account.
I tried setting this all up as suggested by Kubernetes and GitLab:
1. Copy over the ca.crt
2. Copy over the token
- echo "$KUBE_CA_PEM" > kube_ca.pem
- kubectl config set-cluster default-cluster --server=$KUBE_URL --certificate-authority="$(pwd)/kube_ca.pem"
- kubectl config set-credentials default-admin --token=$KUBE_TOKEN
- kubectl config set-context default-system --cluster=default-cluster --user=default-admin
- kubectl config use-context default-system
When I do this it fails with x509: certificate signed by unknown authority.
I tried going to the Google Cloud console > cluster > show credentials and, instead of the token, specifying the username and password shown there; this fails with the same error.
Finally I tried using --insecure-skip-tls-verify=true, but then it complains: error: You must be logged in to the server (the server has asked for the client to provide credentials)
Any help would be appreciated.

The cause of this problem was an incorrect server URL. The server needs to be the one shown on the cluster details page in the Google Cloud console, where you will find the Endpoint IP address.
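If you prefer the command line, you can look up that endpoint with the gcloud SDK (cluster name and zone here are placeholders):
gcloud container clusters describe <cluster-name> --zone <zone> --format='value(endpoint)'
Then set KUBE_URL to https://<that-endpoint-ip> for the kubectl config set-cluster step above.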

Related

Cannot install Helm chart when accessing GKE cluster directly

I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using a proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with the relevant parameters.
To do that I'm running a docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
Unfortunately it returns:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built into kubectl itself, but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig (${HOME}/.kube/config) file into the container; it should (!) then authenticate just as kubectl does, which will not only use the access token correctly but will also renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE It appears that several other local paths (.helm, .config and .cache) may be required too.
Problem solved! A more experienced colleague has found the solution.
I should have used the address including the "https://" protocol prefix. That however still kept returning the "Kubernetes cluster unreachable" error, now with "unknown" details instead.
I had also been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used in the form system:serviceaccount:<namespace>:<service_account>. However, that alone did not change the error either.
The service account lacked a proper role; the following command did the job: kubectl create rolebinding <binding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be a role we want to give away freely.
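Putting the pieces together, the working flow looked roughly like this (a sketch with placeholder names; on Kubernetes 1.24+ a token has to be requested explicitly, e.g. with kubectl create token, while older clusters auto-create a token secret for the service account):
kubectl create serviceaccount helm-deployer -n <namespace>
kubectl create rolebinding helm-deployer-binding -n <namespace> --clusterrole=cluster-admin --serviceaccount=<namespace>:helm-deployer
TOKEN=$(kubectl create token helm-deployer -n <namespace>)
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart \
  --kube-apiserver https://<cluster_endpoint> \
  --kube-ca-file /chart/<cluster_certificate_file> \
  --kube-token "$TOKEN" \
  -n <namespace>
Since the token already authenticates as the service account, --kube-as-user is not strictly needed here.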

DO Kubernetes Cluster + GCP Container Registry

I have a Kubernetes cluster in Digital Ocean, I want to pull the images from a private repository in GCP.
I tried to create a secret that lets me pull the images, following this article: https://blog.container-solutions.com/using-google-container-registry-with-kubernetes
Basically, these are the steps
In the GCP account, create a service account key, with a JSON credential
Execute
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/json-key-file.json)" \
--docker-email=any@valid.email
In the deployment YAML, reference the secret:
imagePullSecrets:
- name: gcr-json-key
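For reference, this is roughly where that fragment sits in the Deployment spec (image and names here are just examples), at the pod spec level alongside containers:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backendnodeapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backendnodeapi
  template:
    metadata:
      labels:
        app: backendnodeapi
    spec:
      containers:
      - name: backendnodeapi
        image: gcr.io/myapp/backendnodeapi:latest
      imagePullSecrets:
      - name: gcr-json-key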
I don't understand why I am getting 403. Are there restrictions on using the registry outside Google Cloud, or did I miss some configuration?
Failed to pull image "gcr.io/myapp/backendnodeapi:latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/myapp/backendnodeapi:latest": failed to resolve reference "gcr.io/myapp/backendnodeapi:latest": unexpected status code [manifests latest]: 403 Forbidden
Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has permissions to access Container Registry.
Docker requires privileged access to interact with registries. On Linux or Windows, add the user that you use to run Docker commands to the Docker security group.
This documentation has details on prerequisites for container registry.
Note:
Ensure that kubectl is on the latest version.
I tried replicating this by following the document you provided and it worked at my end, so ensure that all the prerequisites are met.
That JSON string is not a password.
The documentation suggests either activating the service account:
gcloud auth activate-service-account [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com --key-file=~/service-account.json
or adding the configuration to $HOME/.docker/config.json
and then running docker-credential-gcr configure-docker.
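For example, on a workstation that flow would look roughly like this (account name, project and key path are placeholders):
gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com --key-file=~/service-account.json
docker-credential-gcr configure-docker
docker pull gcr.io/myapp/backendnodeapi:latest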
Kubernetes seems to demand a service-account token secret,
and this requires the annotation kubernetes.io/service-account.name.
Also see Configure Service Accounts for Pods.
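For illustration, such a service-account token secret looks roughly like this (names are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token
  annotations:
    kubernetes.io/service-account.name: my-service-account
type: kubernetes.io/service-account-token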

Promote image across OpenShift clusters

I'm trying to work out whether an image change trigger can fire based on an update to an image in a different OpenShift cluster.
E.g.: if I have a non-prod cluster and a prod cluster, can I have a deployment configured in the prod cluster with an image change trigger, with the image coming from the non-prod cluster's image registry?
I followed documentation here:
https://dzone.com/articles/pulling-images-from-external-container-registry-to
https://docs.openshift.com/container-platform/4.5/openshift_images/managing_images/using-image-pull-secrets.html
Based on the above documents,
I created a docker-registry secret in the prod cluster with docker-password = default-token-value from the non-prod project secret. The syntax used:
oc create secret docker-registry non-prod-registry-secret --namespace <<prod-namespace>> --docker-server non-prod-image-registry-external-route --docker-username serviceaccount --docker-password <<base-64-default-token-value>> --docker-email a@b.c
I also linked the builder, deployer and default service accounts with the new secret created above.
I also created an image stream in the prod cluster like this:
oc import-image my-image-name --from=non-prod-image-registry-external-route/project/nonprodimage:latest --confirm --scheduled=true --dry-run=false -n prod-namespace
The image stream was created successfully in the prod cluster and referred to the latest sha:xxx identifier in the prod-namespace.
However, when creating a deployment through oc new-app my-image-name:latest --name mynewapp on the above image stream, it generates ImagePullBackOff. Here is the exact error message:
Failed to pull image "non-prod-image-registry-external-route/non-prod-namespace/nonprodimage:shaxxx": rpc error: code = Unknown desc = error pinging docker registry non-prod-image-registry-external-route: Get https://non-prod-image-registry-external-route/v2/: x509: certificate signed by unknown authority
I have this setup working following a similar process. Since our organization requires periodic password resets, creating a docker-registry secret based on my credentials was not a good solution.
Instead, we created a dedicated service account in the non-prod environment, pulled down the associated docker config and created an "image promotion" secret based on it in stage and prod environments.
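In outline, that flow looks something like this (a sketch with placeholder names; exact commands vary by OpenShift version):
oc create serviceaccount image-promoter -n non-prod-namespace
oc policy add-role-to-user system:image-puller system:serviceaccount:non-prod-namespace:image-promoter -n non-prod-namespace
# Grab the service account's token (on newer clusters: oc create token image-promoter -n non-prod-namespace)
TOKEN=$(oc sa get-token image-promoter -n non-prod-namespace)
# Then, logged in to the prod cluster, create the pull secret from that token and link it
oc create secret docker-registry image-promotion-secret -n prod-namespace \
  --docker-server=non-prod-image-registry-external-route \
  --docker-username=image-promoter --docker-password="$TOKEN"
oc secrets link default image-promotion-secret --for=pull -n prod-namespace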
The only comment I have, based on your post and the error message
x509: certificate signed by unknown authority
is to use the insecure option on oc import-image:
--insecure=false: If true, allow importing from registries that have invalid HTTPS certificates or are hosted via HTTP. This flag will take precedence over the insecure annotation.
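For example, re-running the import with the flag set (same placeholders as in your command):
oc import-image my-image-name --from=non-prod-image-registry-external-route/project/nonprodimage:latest --confirm --scheduled=true --insecure=true -n prod-namespace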

error: failed to discover supported resources kubernetes google cloud platform

I was doing a practical exercise where I was deploying a containerised sample application using Kubernetes.
I was trying to run the container on Google Cloud Platform using Kubernetes Engine. But while deploying the container with the "kubectl run" command in Google Cloud Shell,
it shows the error "error: failed to discover supported resources: Get https://35.240.145.231/apis/extensions/v1beta1: x509: certificate signed by unknown authority".
From the error, I gather that it is because the SSL certificate is not authorised.
I even exported the config file residing at "$HOME/.kube/config", but am still getting the same error.
Could anyone please help me understand the real issue behind this?
Best,
Swapnil Pawar
You may try the following steps.
List all the available clusters:
$ gcloud container clusters list
Depending upon how you have configured the cluster: if the cluster location is configured for a specific zone, then
$ gcloud container clusters get-credentials <cluster_name> --zone <location>
or, if the location is configured for a region, then
$ gcloud container clusters get-credentials <cluster_name> --region <location>
The above command will update your kubectl config file $HOME/.kube/config
Now, the tricky part.
If you have more than one cluster that you have configured, then your $HOME/.kube/config will have two or more entries. You can verify it by doing a cat command on the config file.
To select a particular context/cluster, you need to run the following commands:
$ kubectl config get-contexts -o=name   # lists the available contexts
$ kubectl config use-context <CONTEXT_NAME>
(Note that kubectl config set-context <CONTEXT_NAME> only creates or modifies a context entry; use-context is what actually switches to it.)
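For example, the contexts created by get-credentials for GKE are named gke_<project>_<location>_<cluster>, so switching might look like this (values are placeholders):
$ kubectl config use-context gke_my-project_europe-west1-b_my-cluster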
Now you can run kubectl run.

K8S dashboard not accessible after first cluster in GKE - GCP using console

Newbie setup:
Created my first project in GCP.
Created a cluster with defaults, 3 nodes. Node version 1.7.6, cluster master version 1.7.6-gke.1.
Deployed an application in a pod, per the example.
Able to access "hello world" and the hostname, using the external IP and the port.
On the GCP / GKE page of my cloud console, I clicked "Discovery and load balancing" and could see the "kubernetes-dashboard" service with a green tick, but cannot access it through the IP listed. Tried 8001, 9090, /ui and nothing worked.
I am not using Cloud Shell or gcloud commands on my local laptop; everything is done in the console.
Questions:
How can anyone access the kubernetes-dashboard of a cluster created in the console?
The docs are unclear; are the dashboard components incorporated into the console itself? Are the docs out of sync with the GCP-GKE screens?
The tutorial says to run "kubectl proxy" and then open
"http://localhost:8001/ui", but it doesn't work. Why?
If you create a cluster with version 1.9.x or greater, then you can access it using tokens.
Get the secret:
kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Copy the secret.
Run kubectl proxy.
Open the UI at 127.0.0.1:8001/ui. This will redirect to the login page.
There will be two options to log in: kubeconfig and token.
Select token and paste the secret copied earlier.
Hope this helps.
It seems to be an issue with the internal Kubernetes DNS service starting at version 1.7.6 on Google Cloud.
The solution is to access the dashboard at this endpoint instead:
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Github Issue links:
https://github.com/kubernetes/dashboard/issues/2368
https://github.com/kubernetes/kubernetes/issues/52729
The address of the dashboard service is only accessible from inside of the cluster. If you ssh into a node in your cluster, you should be able to connect to the dashboard. You can verify this by noticing that the address is within the services CIDR range for your cluster.
The dashboard is running as a pod inside of your cluster with an associated service. If you open the Workloads view you will see the kubernetes-dashboard deployment and can see the pod that was created by the deployment. I'm not sure which docs you are referring to, since you didn't provide a link.
When you run kubectl proxy it creates a secure connection from your local machine into your cluster. It works by connecting to your master and then running through a proxy on the master to the pod/service/host that you are connecting to via an SSH tunnel. It's possible that it isn't working because the SSH tunnels are not running; you should verify that your project has the newly created SSH firewall rules allowing access from the cluster endpoint IP address. Otherwise, if you could explain more about how it fails, that would be useful for debugging.
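If you want to check those rules, something like this should list them (a sketch; the assumption here is that the GKE-created rules are prefixed with gke-):
gcloud compute firewall-rules list --filter="name~^gke-"
You should see an allow rule for tcp:22 covering your nodes.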
First:
gcloud container clusters get-credentials cluster-1 --zone my-zone --project my-project
Then find your Kubernetes dashboard endpoint by running:
kubectl cluster-info
It will be something like https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Install the kube-dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Run:
$ kubectl proxy
Access:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login