I'm configuring CI/CD in OpenShift: Dev > Stage > Prod, and I'm having trouble reaching the Dev ImageStream from Stage. The whole setup looks like this:
Dev - runs Tekton pipeline and on the last task triggers BuildConfig > Build outputs new image to ImageStream > ImageStream new tag triggers DeploymentConfig > Deployment happens
Stage - I'd like to reach tag in ImageStream in Dev so I could build and deploy application in Stage.
I'm using OpenShift internal registry image-registry.openshift-image-registry.svc:port
In Stage, what I've done is add one Task in the Pipeline to execute an image-pull command:
oc import-image image-registry.openshift-image-registry.svc:port/namespace/name:version --confirm
but I get the following error:
Error from server (Forbidden): imagestreams.image.openshift.io "name" is forbidden:
User "system:serviceaccount:namespace:sa" cannot get resource "imagestreams" in API group "image.openshift.io" in the namespace "namespace"
I have the same serviceAccount sa in Dev and Stage, and it only has a github-secret.
According to some examples, like the OpenShift documentation on cluster role bindings:
$ oc adm policy add-cluster-role-to-user <role> <username>
Binds a given role to specified users for all projects in the cluster.
This applies within the same cluster's boundaries.
and a previous Stack Overflow post:
oc policy add-role-to-user \
system:image-puller system:serviceaccount:testing2:default \
--namespace=testing1
Your project testing2 will be able to access images from project testing1 in your openshift.
This works between projects (good), but still within the same cluster (I need a different cluster).
Is there a way to set a role binding that allows reaching an ImageStream from a different cluster? Or a cluster role? Or is there another way to achieve this?
Any help is appreciated
You need a service account with the system:image-puller role in the namespace that holds your image stream, then get the token from that service account and use it as a pull secret from your other cluster.
I would recommend making a mirror ImageStream in your pulling cluster to manage the link.
CLUSTER_TARGET=cluster-b
CLUSTER_PULLING=cluster-a
C_B_NAMESPACE=Y
C_B_SERVICEACCOUNT_FOR_PULL=${CLUSTER_PULLING}-sa
C_B_REGISTRY=image-registry.cluster-b.com:5000
IMAGE_ID=image:tag
# run these oc commands against Cluster B
oc create sa $C_B_SERVICEACCOUNT_FOR_PULL -n $C_B_NAMESPACE
oc policy add-role-to-user system:image-puller system:serviceaccount:$C_B_NAMESPACE:$C_B_SERVICEACCOUNT_FOR_PULL -n $C_B_NAMESPACE
SA_TOKEN=$(oc sa get-token $C_B_SERVICEACCOUNT_FOR_PULL -n $C_B_NAMESPACE)
# run these oc commands against Cluster A
C_A_NAMESPACE=X
# auth is base64 of "<user>:<token>"; any username works against the internal registry's token auth
SECRET="{\"auths\":{\"$C_B_REGISTRY\":{\"auth\":\"$(echo -n "serviceaccount:$SA_TOKEN" | base64 -w0)\",\"email\":\"you@example.com\"}}}"
oc create secret generic ${CLUSTER_TARGET}-pullsecret \
--from-literal=.dockerconfigjson="$SECRET" \
--type=kubernetes.io/dockerconfigjson -n $C_A_NAMESPACE
oc secrets link default ${CLUSTER_TARGET}-pullsecret --for=pull -n $C_A_NAMESPACE
oc tag $C_B_REGISTRY/${C_B_NAMESPACE}/${IMAGE_ID} ${C_A_NAMESPACE}/${IMAGE_ID} --scheduled -n $C_A_NAMESPACE
# now you have a scheduled pull between A and B to your local ImageStream.
# If you want to use it from another namespace in Cluster A:
oc create namespace Z
oc policy add-role-to-user system:image-puller system:serviceaccount:Z:default -n $C_A_NAMESPACE
echo "now pods in Z can reference image-registry.openshift-image-registry.svc/${C_A_NAMESPACE}/${IMAGE_ID}"
Check out the docs on pull secrets here.
For Tekton, I'm not sure, but basically you need:
a pull secret to pull from the external registry
a service account with image-puller to pull locally (hence the local mirroring ImageStream to make your life easier), as sketched below
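A minimal sketch of wiring this into Tekton in the pulling cluster, assuming your Tekton Tasks run as the pipeline service account (the OpenShift Pipelines default; adjust the name if yours differs) and reusing the names from the script above:
# attach the external pull secret to the SA your Tekton pipeline runs as,
# so Tasks/BuildConfigs can pull through the mirror ImageStream
oc secrets link pipeline ${CLUSTER_TARGET}-pullsecret --for=pull -n $C_A_NAMESPACE
# if the pipeline runs in a different namespace than the mirror ImageStream,
# also grant it image-puller on that namespace:
oc policy add-role-to-user system:image-puller system:serviceaccount:<pipeline-namespace>:pipeline -n $C_A_NAMESPACE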
Related
I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with relevant parameters.
To do that I'm running a docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
unfortunately it returns :
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built into kubectl itself, but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig file (${HOME}/.kube/config) into the container; it should (!) then authenticate just as kubectl does, which will not only use the access token correctly but also renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE It appears several other local paths (.helm, .config and .cache) may be required too.
Problem solved! A more experienced colleague has found the solution.
I should have used the endpoint address including the "https://" protocol prefix. That however still returned the "Kubernetes cluster unreachable" error, now with "unknown" details instead.
I had been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used in the form system:serviceaccount:<namespace>:<service_account>. However, that alone did not change the error either.
The service account lacked a proper role; the following command did the job: kubectl create rolebinding <binding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be the role we want to give away freely.
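Putting those three fixes together, a sketch of the whole flow (service account name, binding name and namespace are placeholders; kubectl create token needs kubectl/cluster 1.24+, older clusters expose a token Secret instead):
# create a dedicated service account and give it a role
# (cluster-admin only mirrors the fix above; a narrower role is preferable)
kubectl -n <namespace> create serviceaccount helm-deployer
kubectl -n <namespace> create rolebinding helm-deployer-admin \
  --clusterrole=cluster-admin --serviceaccount=<namespace>:helm-deployer
# get a token for the service account
TOKEN=$(kubectl -n <namespace> create token helm-deployer)
# run Helm with the full https:// endpoint, the cluster CA and the token
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart \
  --kube-apiserver https://<cluster_endpoint> \
  --kube-ca-file /chart/<cluster_certificate_file> \
  --kube-as-user system:serviceaccount:<namespace>:helm-deployer \
  --kube-token "$TOKEN" -n <namespace>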
I'm trying to work out whether an image change trigger can fire based on an update to an image in a different OpenShift cluster.
e.g.: If I have a non-prod cluster and a prod cluster, can I have a deployment configured in the prod cluster with an image change trigger, with the image coming from the non-prod cluster's image registry?
I followed documentation here:
https://dzone.com/articles/pulling-images-from-external-container-registry-to
https://docs.openshift.com/container-platform/4.5/openshift_images/managing_images/using-image-pull-secrets.html
Based on the above documents,
I created a docker-registry secret in the prod cluster with docker-password = default-token-value from the non-prod/project secret. The syntax used:
oc create secret docker-registry non-prod-registry-secret --namespace <<prod-namespace>> --docker-server non-prod-image-registry-external-route --docker-username serviceaccount --docker-password <<base-64-default-token-value>> --docker-email a#b.c
I also linked the builder, deployer and default service accounts with the new secret created above.
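For reference, that linking is typically done with oc secrets link, roughly like this (a sketch; the namespace is a placeholder):
oc secrets link default non-prod-registry-secret --for=pull -n <prod-namespace>
oc secrets link deployer non-prod-registry-secret --for=pull -n <prod-namespace>
# the builder SA gets the secret mounted so builds can pull from the external registry
oc secrets link builder non-prod-registry-secret -n <prod-namespace>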
I also created an image stream in the prod cluster like this:
oc import-image my-image-name --from=non-prod-image-registry-external-route/project/nonprodimage:latest --confirm --scheduled=true --dry-run=false -n prod-namespace
The imagestream was created successfully in the prod cluster and was referring to the latest sha:xxx identifier in the prod-namespace.
However, when creating a deployment through oc new-app my-image-name:latest --name mynewapp on the above image stream, it generates ImagePullBackOff. Here is the exact error message:
Failed to pull image "non-prod-image-registry-external-route/non-prod-namespace/nonprodimage:shaxxx": rpc error: code = Unknown desc = error pinging docker registry non-prod-image-registry-external-route: Get https://non-prod-image-registry-external-route/v2/: x509: certificate signed by unknown authority
I have this setup working following a similar process. Since our organization requires periodic password resets, creating a docker-registry secret based on my credentials was not a good solution.
Instead, we created a dedicated service account in the non-prod environment, pulled down the associated docker config and created an "image promotion" secret based on it in stage and prod environments.
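A sketch of that approach, with a dedicated image-promoter service account (all names here are placeholders picked for illustration):
# in the non-prod cluster: a dedicated SA allowed to pull images
oc create sa image-promoter -n <non-prod-namespace>
oc policy add-role-to-user system:image-puller \
  system:serviceaccount:<non-prod-namespace>:image-promoter -n <non-prod-namespace>
TOKEN=$(oc sa get-token image-promoter -n <non-prod-namespace>)
# in the stage/prod cluster: an "image promotion" pull secret against the external registry route
oc create secret docker-registry image-promotion-secret \
  --docker-server=<non-prod-image-registry-external-route> \
  --docker-username=image-promoter --docker-password="$TOKEN" \
  --docker-email=a@b.c -n <prod-namespace>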
The only comment I had, based on your post and the error message:
x509: certificate signed by unknown authority
is to use the --insecure option of oc import-image:
--insecure=false: If true, allow importing from registries that have invalid HTTPS certificates or are hosted via HTTP. This flag will take precedence over the insecure annotation.
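Applied to the import command from the question, that would look roughly like this (only the flag is added):
oc import-image my-image-name --from=non-prod-image-registry-external-route/project/nonprodimage:latest \
  --confirm --scheduled=true --insecure=true -n prod-namespace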
Trying to access the Kubernetes dashboard (Azure AKS) by using the below command, but getting the error as attached.
az aks browse --resource-group rg-name --name aks-cluster-name --listen-port 8851
Please read the AKS documentation on how to authenticate to the dashboard (see link). It also explains how to enable the add-on for newer versions of Kubernetes.
Pasting here for reference:
Use a kubeconfig
For both Azure AD enabled and non-Azure AD enabled clusters, a kubeconfig can be passed in. Ensure access tokens are valid; if your tokens are expired you can refresh them via kubectl.
Set the admin kubeconfig with az aks get-credentials -a --resource-group <RG_NAME> --name <CLUSTER_NAME>
Select Kubeconfig and click Choose kubeconfig file to open file selector
Select your kubeconfig file (defaults to $HOME/.kube/config)
Click Sign In
Use a token
For non-Azure AD enabled cluster, run kubectl config view and copy the token associated with the user account of your cluster.
Paste into the token option at sign in.
Click Sign In
For Azure AD enabled clusters, retrieve your AAD token with the following command. Validate you've replaced the resource group and cluster name in the command.
kubectl config view -o jsonpath='{.users[?(@.name == "clusterUser_<RESOURCE GROUP>_<AKS_NAME>")].user.auth-provider.config.access-token}'
Try to run this
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
You will get values for several keys such as Name, Labels, ..., token. The important one is the token related to your user. Copy that token and paste it in.
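If you prefer to grab just the token value directly, a variant of the same command (assuming the admin-user secret exists in kube-system):
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode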
As we know, by default when we create a Strimzi Kafka user, the user gets its own user.crt & user.key created in a Kubernetes Secret, but I want to use my own user.crt & user.key. Is it feasible?
Rather than creating the user first and then replacing with our own keys, do we have an option to pass our own crt and key at user-creation time? Can we specify them somehow in the deployment file?
From the official docs I found https://strimzi.io/docs/master/#installing-your-own-ca-certificates-str, but it's for kind: Kafka, not for kind: KafkaUser, and kind: KafkaUser is what's used for user creation.
I'm answering my own question!
STEP1:
kubectl -n <namespace> create secret generic <user-secret> --from-file=ca.crt=<ca-cert-file> --from-file=user.crt=<user-cert-file> --from-file=user.key=<user-key-file>
Eg:
kubectl -n kafka create secret generic custom-strimzi-user --from-file=ca.crt=ca-decoded.crt --from-file=user.crt=user-decoded.crt --from-file=user.key=user-decoded.key -o yaml
STEP2:
kubectl -n <namespace> label secret <ca-cert-secret> strimzi.io/kind=<Kafka or KafkaUser> strimzi.io/cluster=<my-cluster>
Eg:
kubectl -n kafka label secret custom-strimzi-user strimzi.io/kind=KafkaUser strimzi.io/cluster=kafka
STEP3: Now to enable ACLs & TLS for the user created above:
Apply the Strimzi-provided user-creation YAML (kind: KafkaUser) after replacing the user name with the one created above, then execute:
kubectl apply -f kafka-create-user.yml
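For reference, a sketch of what kafka-create-user.yml might contain, reusing the secret and cluster names from the steps above (the apiVersion depends on your Strimzi version, and the ACL is only an illustrative example):
apiVersion: kafka.strimzi.io/v1beta2   # may be v1beta1 on older Strimzi releases
kind: KafkaUser
metadata:
  name: custom-strimzi-user
  labels:
    strimzi.io/cluster: kafka
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read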
Note: if we run kubectl apply -f kafka-create-user.yml before creating the custom secret as in STEP1 & STEP2, then Strimzi creates the user with its own user.crt & user.key.
FYI: what I shared above is for a custom user crt & key; for the operator's cluster CA (crt & key) there is the official doc here: https://strimzi.io/docs/master/#installing-your-own-ca-certificates-str
I deploy apps to Kubernetes running on Google Cloud from CI. CI makes use of a kubectl config which contains auth information (either stored directly in version control or templated from env vars during the build).
CI has a separate Google Cloud service account and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets up the kubectl config, but the token expires in a few hours.
What are my options for having a 'permanent' kubectl config, other than providing CI with the key file during the build and running gcloud container clusters get-credentials?
You should look into RBAC (role-based access control), which authenticates via a role and avoids the expiration issue, in contrast to the gcloud-generated credentials which, as mentioned, currently expire.
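A sketch of that direction: create a dedicated Kubernetes service account for CI, bind a role to it, and build a standalone kubeconfig around its token (names, namespace and duration are placeholders; kubectl create token needs 1.24+, and the API server may cap the requested duration):
# in the target cluster: a CI service account with just enough rights to deploy
kubectl create namespace ci
kubectl -n ci create serviceaccount ci-deployer
kubectl -n ci create rolebinding ci-deployer-edit --clusterrole=edit --serviceaccount=ci:ci-deployer
# issue a token and assemble a self-contained kubeconfig for CI
TOKEN=$(kubectl -n ci create token ci-deployer --duration=8760h)
kubectl config set-cluster gke --server=https://<cluster_endpoint> \
  --certificate-authority=<cluster-ca.crt> --embed-certs=true --kubeconfig=ci-kubeconfig
kubectl config set-credentials ci-deployer --token="$TOKEN" --kubeconfig=ci-kubeconfig
kubectl config set-context ci --cluster=gke --user=ci-deployer --namespace=ci --kubeconfig=ci-kubeconfig
kubectl config use-context ci --kubeconfig=ci-kubeconfig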
For those asking the same question and upvoting.
This is my current solution:
For some time I treated key-file.json as an identity token, put it in the CI config, and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config - the same approach used for GCP container registry login.
This works fine, but using kubectl in CI is kind of an antipattern. I switched to deploying based on container registry push events. This is relatively easy to do in k8s with Keel, Flux, etc. So CI only has to push the Docker image to the registry and its job ends there. The rest is taken care of within k8s itself, so there is no need for kubectl and its config in the CI jobs.
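As an illustration of the push-driven approach, with Keel it is roughly a matter of annotating the Deployment so it updates itself when a new image tag is pushed (a sketch; the annotation values and image path are assumptions to adapt to your setup):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    keel.sh/policy: minor            # follow new minor/patch tags
    keel.sh/trigger: poll            # poll the registry (push webhooks are also supported)
    keel.sh/pollSchedule: "@every 5m"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/<project>/my-app:1.0.0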