DO Kubernetes Cluster + GCP Container Registry

I have a Kubernetes cluster in Digital Ocean, I want to pull the images from a private repository in GCP.
I tried to create a secret that allows me to pull the images, following this article: https://blog.container-solutions.com/using-google-container-registry-with-kubernetes
Basically, these are the steps:
In the GCP account, create a service account key with a JSON credential.
Execute
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/json-key-file.json)" \
--docker-email=any@valid.email
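As a sanity check, the same key file should also allow a plain Docker login against gcr.io (assuming Docker is available locally; the path is the one from the command above):
cat ~/json-key-file.json | docker login -u _json_key --password-stdin https://gcr.io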
In the deployment YAML, reference the secret:
imagePullSecrets:
- name: gcr-json-key
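For context, the reference sits in the pod spec next to the containers, roughly like this minimal sketch (using the image name from the error message below):
spec:
  template:
    spec:
      containers:
      - name: backendnodeapi
        image: gcr.io/myapp/backendnodeapi:latest
      imagePullSecrets:
      - name: gcr-json-key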
I don't understand why I am getting 403: whether there are restrictions on using the registry from outside Google Cloud, or whether I missed some configuration.
Failed to pull image "gcr.io/myapp/backendnodeapi:latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/myapp/backendnodeapi:latest": failed to resolve reference "gcr.io/myapp/backendnodeapi:latest": unexpected status code [manifests latest]: 403 Forbidden

Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has permission to access Container Registry.
Docker requires privileged access to interact with registries. On Linux or Windows, add the user that you use to run Docker commands to the Docker security group.
This documentation has details on prerequisites for container registry.
Note:
Ensure that kubectl is at the latest version.
I tried replicating this by following the document you provided and it worked at my end, so ensure that all the prerequisites are met.
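In particular, pull access to gcr.io can be granted by giving the service account the Storage Object Viewer role on the project hosting the registry, for example (the project ID and account name here are placeholders):
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/storage.objectViewer"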

That JSON string is not a password.
The documentation suggests either activating the service account:
gcloud auth activate-service-account [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com --key-file=~/service-account.json
Or add the configuration to $HOME/.docker/config.json
And then run docker-credential-gcr configure-docker.
Kubernetes seems to demand a service-account token secret
and this requires annotation kubernetes.io/service-account.name.
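A minimal sketch of such a token secret, assuming a service account named build-robot already exists in the namespace (the names are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-token
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token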
Also see Configure Service Accounts for Pods.

Related

Jenkins cron job to run selenium & k8s

I am working on a project in which I have created a k8s cluster to run selenium grid locally. I want to schedule the tests to run and until now I have tried to create a Jenkins cron job to do so. For that I am using k8s plugin in Jenkins.
However, I am not sure about the steps to follow. Where should I be uploading the kubeconfig file? There are a few options under Build Environment in Jenkins.
Any ideas or suggestions?
Thanks
Typically, you can choose any option, depending on how you want to manage the system, I believe:
The secret text or file option will allow you to copy/paste a secret (with a token) into Jenkins, which will be used to access the k8s cluster. Token-based access works by adding an HTTP header to your requests to the k8s API server as follows: Authorization: Bearer $YOUR_TOKEN. This authenticates you to the server. This is the programmatic way to access the k8s API (an example request is sketched after these two options).
The configure kubectl option will allow you to specify the config file within the Jenkins UI where you can set the kubeconfig. This is the imperative/scripted way of configuring access to the k8s API. The kubeconfig itself contains a set of key-pair-based credentials that are issued to a username and signed by the API server's CA.
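For illustration, the token-based access mentioned above amounts to something like this (the endpoint and token are placeholders; -k skips TLS verification, --cacert with the cluster CA would be the proper alternative):
curl -k -H "Authorization: Bearer $YOUR_TOKEN" https://<k8s-api-server>:6443/api/v1/namespaces/default/pods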
Any way would work fine! Hope this helps!
If Jenkins is running in Kubernetes as well, I'd create a service account, create the necessary Role and RoleBinding to only create CronJobs, and attach your service account to your Jenkins deployment or statefulset, then you can use the token of the service account (by default mounted under /var/run/secrets/kubernetes.io/serviceaccount/token) and query your API endpoint to create your CronJobs.
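A rough sketch of such a Role and RoleBinding, assuming the service account is called jenkins and lives in the ci namespace (names are illustrative, and the verbs can be trimmed down further):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cronjob-creator
  namespace: ci
rules:
- apiGroups: ["batch"]
  resources: ["cronjobs"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-cronjob-creator
  namespace: ci
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: ci
roleRef:
  kind: Role
  name: cronjob-creator
  apiGroup: rbac.authorization.k8s.io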
However, if Jenkins is running outside of your Kubernetes cluster, I'd authenticate against your cloud provider in Jenkins using one of the plugins available, using:
Service account (GCP)
Service principal (Azure)
AWS access and secret key or with an instance profile (AWS).
and then would run any of the CLI commands to generate a kubeconfig file:
gcloud container clusters get-credentials
az aks get-credentials
aws eks update-kubeconfig
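For example, the GKE variant usually also needs the cluster location and project (the values here are placeholders):
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project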

Cannot install Helm chart when accessing GKE cluster directly

I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with relevant parameters.
To do that I'm running a docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
Unfortunately it returns:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built into kubectl itself but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig (${HOME}/.kube/config) file into the container as it should (!) then authenticate as if it were kubectl which will not only leverage the access token correctly but will renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE: It appears there are several other local paths (.helm, .config and .cache) that may be required too.
Problem solved! A more experienced colleague has found the solution.
I should have used the address including the "https://" protocol specification. That however still kept returning the "Kubernetes cluster unreachable" error, with "unknown" details instead.
I had been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used in the form system:serviceaccount:<namespace>:<service_account>. However, that did not alter the error either.
The service account lacked a proper role; the following command did the job: kubectl create rolebinding <binding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be the role we want to give away freely.

Promote image across openshift clusters

I'm trying to work out whether an image change trigger can fire based on an update to an image in a different OpenShift cluster.
e.g. if I have a non-prod cluster and a prod cluster, can I have a deployment configured in the prod cluster with an image change trigger, with the image coming from the non-prod cluster's image registry?
I followed documentation here:
https://dzone.com/articles/pulling-images-from-external-container-registry-to
https://docs.openshift.com/container-platform/4.5/openshift_images/managing_images/using-image-pull-secrets.html
Based on the above documents, I created a docker-registry secret in the prod cluster with docker-password set to the default token value from the non-prod project's secret. The syntax used:
oc create secret docker-registry non-prod-registry-secret --namespace <<prod-namespace>> --docker-server non-prod-image-registry-external-route --docker-username serviceaccount --docker-password <<base-64-default-token-value>> --docker-email a@b.c
I also linked the builder, deployer and default service accounts with the new secret created above.
I also created an image stream in the prod cluster like this:
oc import-image my-image-name --from=non-prod-image-registry-external-route/project/nonprodimage:latest --confirm --scheduled=true --dry-run=false -n prod-namespace
The imagestream was created successfully in the prod cluster and was referring to the latest sha:xxx identifier in the prod-namespace.
However, when creating a deployment through oc new-app my-image-name:latest --name mynewapp on the above imagestream, it generates ImagePullBackOff. Here is the exact error message:
Failed to pull image "non-prod-image-registry-external-route/non-prod-namespace/nonprodimage:shaxxx": rpc error: code = Unknown desc = error pinging docker registry non-prod-image-registry-external-route: Get https://non-prod-image-registry-external-route/v2/: x509: certificate signed by unknown authority
I have this setup working following a similar process. Since our organization requires periodic password resets, creating a docker-registry secret based on my credentials was not a good solution.
Instead, we created a dedicated service account in the non-prod environment, pulled down the associated docker config and created an "image promotion" secret based on it in stage and prod environments.
The only comment I had, based on your post and the error message:
x509: certificate signed by unknown authority
is to use the --insecure option of the import command:
--insecure=false: If true, allow importing from registries that have invalid HTTPS certificates or are hosted via HTTP. This flag will take precedence over the insecure annotation.
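Applied to the import command from the question, that would look roughly like:
oc import-image my-image-name --from=non-prod-image-registry-external-route/project/nonprodimage:latest --confirm --scheduled=true --insecure=true -n prod-namespace
Adding the non-prod registry's CA certificate to the prod cluster's trust store would be the more secure long-term alternative.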

Airflow KubePodOperator pull image from private repository

How can Apache Airflow's KubernetesPodOperator pull docker images from a private repository?
The KubernetesPodOperator has an image_pull_secrets argument to which you can pass a Secrets object to authenticate with the private repository. But the Secrets object can only represent an environment variable or a volume, neither of which fits my understanding of how Kubernetes uses secrets to authenticate with private repos.
Using kubectl you can create the required secret with something like
$ kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
But how can you create the authentication secret in Airflow?
According to the Kubernetes documentation, there is a Secret object of the docker-registry type which can be used to authenticate to a private repository.
As you mentioned in your question, you can use kubectl to create a secret of the docker-registry type that you can then try to pass with image_pull_secrets.
However, depending on the platform you are using, this might have limited or no use at all, according to the Kubernetes documentation:
Configuring Nodes to Authenticate to a Private Registry
Note: If you are running on Google Kubernetes Engine, there will already be a .dockercfg on each node with credentials for Google Container Registry. You cannot use this approach.
Note: If you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.
Note: This approach is suitable if you can control node configuration. It will not work reliably on GCE, and any other cloud provider that does automatic node replacement.
Note: Kubernetes as of now only supports the auths and HttpHeaders section of docker config. This means credential helpers (credHelpers or credsStore) are not supported.
Making this work on the mentioned platforms is possible, but it would require automated scripts and third-party tools.
For example, with Amazon ECR, the Amazon ECR Docker Credential Helper would be needed to periodically pull AWS credentials into the docker registry configuration, and another script would be needed to update the Kubernetes docker-registry secret.
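A rough sketch of such a refresh script for ECR (ECR authorization tokens are valid for about 12 hours, so it would have to run periodically; the secret name and variables are illustrative):
TOKEN=$(aws ecr get-login-password --region "${REGION}")
kubectl delete secret ecr-pull-secret --ignore-not-found
kubectl create secret docker-registry ecr-pull-secret \
--docker-server="https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com" \
--docker-username=AWS \
--docker-password="${TOKEN}"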
As for Airflow itself, I don't think it has functionality to create its own docker-registry secrets.
You can request functionality like that in Apache Airflow JIRA.
P.S.
If you still have issues with your K8s cluster, you might want to create a new question on Stack Overflow addressing them.

"permanent" GKE kubectl service account authentication

I deploy apps to Kubernetes running on Google Cloud from CI. CI makes use of a kubectl config which contains the auth information (either stored directly in the VCS or templated from env vars during the build).
CI has a separate Google Cloud service account and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets the kubectl config, but the token expires in a few hours.
What are my options for having a 'permanent' kubectl config, other than providing CI with the key file during the build and running gcloud container clusters get-credentials?
You should look into RBAC (role-based access control), which authenticates based on a role and avoids the expiration issue, in contrast to the certificates/tokens that currently expire as mentioned.
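A sketch of that approach with a dedicated in-cluster service account whose token does not expire (names and namespace are illustrative; on newer clusters, 1.24+, the token secret has to be created explicitly as a kubernetes.io/service-account-token secret):
kubectl create serviceaccount ci-deployer
kubectl create rolebinding ci-deployer-edit --clusterrole=edit --serviceaccount=default:ci-deployer
SECRET=$(kubectl get serviceaccount ci-deployer -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "${SECRET}" -o jsonpath='{.data.token}' | base64 --decode)
kubectl config set-credentials ci-deployer --token="${TOKEN}"
kubectl config set-context ci --cluster=<cluster-name> --user=ci-deployer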
For those asking the same question and upvoting.
This is my current solution:
For some time I treated key-file.json as an identity token, put it into the CI config and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config, the same approach used for GCP container registry login.
This works fine, but using kubectl in CI is kind of an antipattern. I switched to deploying based on container registry push events. This is relatively easy to do in k8s with Keel, Flux, etc. So CI only has to push the Docker image to the repo and its job ends there. The rest is taken care of within k8s itself, so there is no need for kubectl and its config in the CI jobs.