Promote image across OpenShift clusters - kubernetes

I'm trying to work out whether an image change trigger can fire based on an update to an image in a different OpenShift cluster.
For example: if I have a non-prod cluster and a prod cluster, can I have a deployment configured in the prod cluster with an image change trigger, where the image comes from the non-prod cluster's image registry?
I followed documentation here:
https://dzone.com/articles/pulling-images-from-external-container-registry-to
https://docs.openshift.com/container-platform/4.5/openshift_images/managing_images/using-image-pull-secrets.html
Based on the above documents,
I created a docker-registry secret in the prod cluster with docker-password set to the default-token value from the non-prod project's secret. The syntax used:
oc create secret docker-registry non-prod-registry-secret --namespace <<prod-namespace>> --docker-server non-prod-image-registry-external-route --docker-username serviceaccount --docker-password <<base-64-default-token-value>> --docker-email a@b.c
I also linked the builder, deployer and default service accounts with the new secret created above, as sketched below.
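For reference, the linking was done along these lines (a sketch of the standard oc secrets link usage, with the placeholder names from above):
oc secrets link default non-prod-registry-secret --for=pull -n <<prod-namespace>>
oc secrets link builder non-prod-registry-secret -n <<prod-namespace>>
oc secrets link deployer non-prod-registry-secret -n <<prod-namespace>>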
I also created an image stream in the prod cluster like this:
oc import-image my-image-name --from=non-prod-image-registry-external-route/project/nonprodimage:latest --confirm --scheduled=true --dry-run=false -n prod-namespace
The imagestream was created successfully in the prod cluster and was referring to the latest sha:xxx identifier in the prod-namespace.
However, when creating a deployment through oc new-app my-image-name:latest --name mynewapp on the above image stream, it generates ImagePullBackOff. Here is the exact error message:
Failed to pull image "non-prod-image-registry-external-route/non-prod-namespace/nonprodimage:shaxxx": rpc error: code = Unknown desc = error pinging docker registry non-prod-image-registry-external-route: Get https://non-prod-image-registry-external-route/v2/: x509: certificate signed by unknown authority

I have this setup working by following a similar process. Since our organization requires periodic password resets, creating a docker-registry secret based on my credentials was not a good solution.
Instead, we created a dedicated service account in the non-prod environment, pulled down the associated docker config, and created an "image promotion" secret based on it in the stage and prod environments.
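A rough sketch of that approach with hypothetical names (image-promoter, image-promotion-secret); the exact registry route and role bindings will differ per setup:
# In the non-prod cluster: a dedicated service account that is allowed to pull images
oc create sa image-promoter -n non-prod-namespace
oc policy add-role-to-user system:image-puller system:serviceaccount:non-prod-namespace:image-promoter -n non-prod-namespace
TOKEN=$(oc sa get-token image-promoter -n non-prod-namespace)
# In the stage/prod cluster: build the promotion pull secret from that token
oc create secret docker-registry image-promotion-secret \
  --namespace prod-namespace \
  --docker-server non-prod-image-registry-external-route \
  --docker-username image-promoter \
  --docker-password "$TOKEN"
oc secrets link default image-promotion-secret --for=pull -n prod-namespace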
The only comment I have, based on your post and the error message
x509: certificate signed by unknown authority
is to use the --insecure option of oc import-image:
--insecure=false: If true, allow importing from registries that have invalid HTTPS certificates or are hosted via HTTP. This flag will take precedence over the insecure annotation.
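Applied to your import, that would look roughly like this (a sketch; it skips certificate verification only for the import):
oc import-image my-image-name \
  --from=non-prod-image-registry-external-route/project/nonprodimage:latest \
  --confirm --scheduled=true --insecure=true -n prod-namespace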

Related

DO Kubernetes Cluster + GCP Container Registry

I have a Kubernetes cluster in DigitalOcean, and I want to pull images from a private registry in GCP.
I tried to create a secret that would let me pull the images, following this article: https://blog.container-solutions.com/using-google-container-registry-with-kubernetes
Basically, these are the steps:
In the GCP account, create a service account key with a JSON credential.
Execute:
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/json-key-file.json)" \
--docker-email=any@valid.email
In the deployment YAML, reference the secret:
imagePullSecrets:
- name: gcr-json-key
I don't understand why I am getting a 403, whether there are restrictions on using the registry outside Google Cloud, or whether I missed some configuration.
Failed to pull image "gcr.io/myapp/backendnodeapi:latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/myapp/backendnodeapi:latest": failed to resolve reference "gcr.io/myapp/backendnodeapi:latest": unexpected status code [manifests latest]: 403 Forbidden
Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has permissions to access Container Registry.
Docker requires privileged access to interact with registries. On Linux or Windows, add the user that you use to run Docker commands to the Docker security group.
This documentation has details on the prerequisites for Container Registry.
Note:
Ensure that the version of kubectl is the latest version.
I tried replicating this by following the document you provided and it worked on my end, so ensure that all the prerequisites are met.
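If the 403 turns out to be a permissions issue, granting the pulling service account read access to the registry storage is usually enough; a sketch with placeholder project and account names:
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:pull-sa@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer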
That JSON string is not a password.
The documentation suggests either activating the service account:
gcloud auth activate-service-account [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com --key-file=~/service-account.json
or adding the configuration to $HOME/.docker/config.json
and then running docker-credential-gcr configure-docker.
Kubernetes seems to demand a service-account token Secret,
which requires the annotation kubernetes.io/service-account.name.
Also see Configure Service Accounts for Pods.
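A minimal sketch of such a token Secret, assuming a service account named build-robot already exists in the namespace:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-token
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF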

Generating a kubeconfig file and authenticating for google cloud

I have a Kubernetes cluster. Inside my cluster is a Django application which needs to connect to my Kubernetes cluster on GKE. When my Django app starts up (inside my Dockerfile), I authenticate with Google Cloud by using:
gcloud auth activate-service-account $GKE_SERVICE_ACCOUNT_NAME --key-file=$GOOGLE_APPLICATION_CREDENTIALS
gcloud config set project $GKE_PROJECT_NAME
gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_ZONE
I am not really sure if I need to do this every time my Django container starts, and I am not sure I understand how authentication to Google Cloud works. Could I perhaps just generate my Kubeconfig file, store it somewhere safe and use it all the time instead of authenticating?
In other words, is a Kubeconfig file enough to connect to my GKE cluster?
If your service is running in a Pod inside the GKE cluster you want to connect to, use a Kubernetes service account to authenticate.
Create a Kubernetes service account and attach it to your Pod. If your Pod already has a Kubernetes service account, you may skip this step.
Use Kubernetes RBAC to grant the Kubernetes service account the correct permissions.
The following example grants edit permissions in the prod namespace:
kubectl create rolebinding yourserviceaccount \
--clusterrole=edit \
--serviceaccount=yournamespace:yourserviceaccount \
--namespace=prod
At runtime, when your service invokes kubectl, it automatically receives the credentials you configured.
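A sketch of the first step with the same placeholder names - create the service account and point the Pod (or Deployment pod template) at it via serviceAccountName:
kubectl create serviceaccount yourserviceaccount --namespace yournamespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: django-app        # hypothetical name
  namespace: yournamespace
spec:
  serviceAccountName: yourserviceaccount
  containers:
  - name: app
    image: myrepo/django-app:latest   # hypothetical image
EOF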
You can also store the credentials as a Secret and mount it on your Pod so that it can read them from there.
To use a Secret with your workloads, you can specify environment variables that reference the Secret's values, or mount a volume containing the Secret.
You can create a Secret using the command-line or a YAML file.
Here is an example using the command line:
kubectl create secret SECRET_TYPE SECRET_NAME DATA
SECRET_TYPE: the Secret type, which can be one of the following:
generic: Create a Secret from a local file, directory, or literal value.
docker-registry: Create a dockercfg Secret for use with a Docker registry. Used to authenticate against Docker registries.
tls: Create a TLS Secret from the given public/private key pair. The public/private key pair must already exist. The public key certificate must be .PEM encoded and match the given private key.
For most Secrets, you use the generic type.
SECRET_NAME: the name of the Secret you are creating.
DATA: the data to add to the Secret, which can be one of the following:
A path to a directory containing one or more configuration files, indicated using the --from-file or --from-env-file flags.
Key-value pairs, each specified using --from-literal flags.
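For example, a generic Secret built from a key file and a literal value might look like this (file and key names are placeholders):
kubectl create secret generic my-app-credentials \
  --from-file=service-account=./key.json \
  --from-literal=api-token=abc123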
If you need more information about kubectl create, you can check the reference documentation.

Terraform dial tcp 192.xx.xx.xx:443: i/o timeout error

I am trying to implement CI/CD using GitLab + Terraform against a K8s cluster; the K8s control plane (master node) was set up on CentOS.
However, the pipeline job fails with the following error:
Error: Failed to get existing workspaces: Get "https://192.xx.xx.xx/api/v1/namespaces/default/secrets?labelSelector=tfstate%3Dtrue": dial tcp 192.xx.xx.xx:443: i/o timeout
From the error mentioned above (default/secrets?labelSelector=tfstate%3Dtrue), I assume the error is related to a missing Terraform secret in the default namespace.
Example (Terraform secret taken from my Windows machine):
PS C:\> kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-7mzv6     kubernetes.io/service-account-token   3      27d
tfstate-default-state   Opaque                                1      15h
However, I am not sure which process would create the 'tfstate' secret, or whether we should create it manually.
Kindly let me know if my understanding is wrong or if I have missed anything else.
EDIT
The issue mentioned above occurred because the existing GitLab runner was on a different subnet (e.g. 172.xx.xx.xx instead of 192.xx.xx.xx).
I was asked to use a different GitLab runner that runs on the same subnet, and now it throws the following error:
Error: Failed to get existing workspaces: Get "https://192.xx.xx.xx:6443/api/v1/namespaces/default/secrets?labelSelector=tfstate%3Dtrue": x509: certificate signed by unknown authority
Now, I am a bit confused about whether the certificate issue is between the GitLab runner and the GitLab server, between the GitLab server and the K8s cluster, or something else.
You have configured Kubernetes as the remote state backend for your Terraform configuration. The error means that the backend is trying to query existing secrets to determine which workspaces are configured. The x509: certificate signed by unknown authority message indicates that the KUBECONFIG the remote state backend uses does not match the CA of the API server you are connecting to.
If the runners are K8s pods themselves, make sure you provide a KUBECONFIG that matches your target cluster and that the remote state does not configure itself as in-cluster by reading the service account token every K8s pod has - which in most cases will only work for the cluster the pod is running on.
You don't provide enough information to be more specific. But, big picture, you have to configure the state backend and any provider that connects to K8s. Theoretically, the state backend secrets and the K8s resources do not have to be on the same cluster, meaning you may need different configurations for the state backend and the K8s providers.
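As a sketch, the backend block that produces secrets such as tfstate-default-state typically looks like the following; pointing config_path at a kubeconfig whose CA matches the target API server is what resolves the x509 error (paths and suffix are examples):
cat > backend.tf <<'EOF'
terraform {
  backend "kubernetes" {
    secret_suffix = "state"
    namespace     = "default"
    config_path   = "~/.kube/config"  # kubeconfig for the target cluster
  }
}
EOF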

"permanent" GKE kubectl service account authentication

I deploy apps to Kubernetes running on Google Cloud from CI. The CI makes use of a kubectl config which contains auth information (either directly in CVS or templated from env vars during the build).
The CI has a separate Google Cloud service account, and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets the kubectl config, but the token expires in a few hours.
What are my options for having a 'permanent' kubectl config, other than providing the CI with the key file during the build and running gcloud container clusters get-credentials?
You should look into RBAC (role-based access control), which authenticates a role and avoids expiration, in contrast to certificates, which currently expire as mentioned.
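A sketch of that idea with hypothetical names - create a namespaced service account for CI, bind a role to it, and put its token into the kubectl config:
kubectl create serviceaccount ci-deployer -n my-app
kubectl create rolebinding ci-deployer-edit \
  --clusterrole=edit --serviceaccount=my-app:ci-deployer -n my-app
# Read the service account's token (pre-1.24 style; newer clusters need an explicit token Secret)
SECRET=$(kubectl get sa ci-deployer -n my-app -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -n my-app -o jsonpath='{.data.token}' | base64 --decode)
kubectl config set-credentials ci-deployer --token="$TOKEN"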
For those asking the same question and upvoting.
This is my current solution:
For some time I treated key-file.json as an identity token, put it in the CI config and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config - the same approach used for GCP Container Registry login.
This works fine, but using kubectl in CI is kind of an antipattern. I switched to deploying based on container registry push events. This is relatively easy to do in k8s with Keel, Flux, etc. So the CI only has to push the Docker image to the repo, and its job ends there. The rest is taken care of within k8s itself, so there is no need for kubectl and its config in the CI jobs.

Deploying an app to gke from CI

I use GitLab for my CI; they host it and I have my own runners.
I have a k8s cluster running in GKE.
I want to use kubectl apply to deploy new versions of my containers.
This all works from my local machine because it uses my Google account.
I tried setting this all up as suggested by k8s and GitLab:
1. copy over the ca.crt
2. copy over the token
- echo "$KUBE_CA_PEM" > kube_ca.pem
- kubectl config set-cluster default-cluster --server=$KUBE_URL --certificate-authority="$(pwd)/kube_ca.pem"
- kubectl config set-credentials default-admin --token=$KUBE_TOKEN
- kubectl config set-context default-system --cluster=default-cluster --user=default-admin
- kubectl config use-context default-system
When I do this, it fails with x509: certificate signed by unknown authority.
I tried going to the Google Cloud console > cluster > show credentials and, instead of the token, specifying the username and password it shows me there; this fails with the same error.
Finally, I tried using --insecure-skip-tls-verify=true, but then it complains: error: You must be logged in to the server (the server has asked for the client to provide credentials).
Any help would be appreciated.
The cause of this problem was an incorrect server URL. The server needs to be the one defined on the cluster information page in the Google Cloud console, where you will find an Endpoint IP address.
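In CI terms, that means the set-cluster step should use that endpoint IP (a sketch; the placeholder stands for the value shown in the console):
- kubectl config set-cluster default-cluster --server=https://<endpoint-ip> --certificate-authority="$(pwd)/kube_ca.pem"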