How can I use a Secret object created from a Google Cloud JSON key file in a service account? I have MiniKF on a VM with Kubeflow installed. I am trying to build a container from a Jupyter notebook in the MiniKF Kubernetes cluster. The notebook has access to GCP through a PodDefault, but the Kaniko container that the notebook starts automatically cannot access GCP.
The code in the Jupyter notebook for building the container is as follows:
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
PROJECT_ID=PROJECT_ID,
IMAGE_NAME=IMAGE_NAME,
TAG=TAG
)
builder = kfp.containers._container_builder.ContainerBuilder(
gcs_staging=GCS_BUCKET + "/kfp_container_build_staging")
image_name = kfp.containers.build_image_from_working_dir(
image_name=GCR_IMAGE,
working_dir='./tmp/components/mnist_training/',
builder=builder
)
I get the error:
Error: error resolving source context: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Usage:
executor [flags]
A pod whose name starts with "kaniko" gets created and then fails because it cannot access Google Cloud Storage.
Proof that the Jupyter notebook itself can use my Secret object "user-gcp-sa" is that the code above does stage files on GCS.
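For reference, a PodDefault that attaches a GCP secret to notebook pods typically looks something like the sketch below. The namespace, label, and key filename here are assumptions; note that a PodDefault only mutates pods carrying the matching label, which may be why a pod started by the builder does not inherit the credentials.

apiVersion: kubeflow.org/v1alpha1
kind: PodDefault
metadata:
  name: add-gcp-secret
  namespace: kubeflow-user        # assumption: your notebook's namespace
spec:
  selector:
    matchLabels:
      add-gcp-secret: "true"      # applied only to pods with this label
  desc: Mount user-gcp-sa and point GOOGLE_APPLICATION_CREDENTIALS at it
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /secret/gcp/user-gcp-sa.json
  volumeMounts:
  - name: secret-volume
    mountPath: /secret/gcp
    readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: user-gcp-sa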
Related
I have used Kubernetes to deploy, for example, WordPress or nginx. We install from a YAML file. Where is it installed, and how can I find the directory of pages (for example, WordPress pages), including on Google Cloud? When I use Kubernetes on Google Cloud, where is the path of the installed files (e.g. index.php)?
If you are running the Docker image directly, without attaching anything like NFS, S3, or a disk, then those files (index.php and the rest) will be in the container's file system by default.
With any Kubernetes cluster, on Google Cloud or anywhere else, you can check the files inside the container:
kubectl get pods
kubectl exec -it <wordpress-pod-name> -- /bin/bash
If you attach a file system like NFS, or object storage like S3 or EFS, you will find those files there once you mount it and apply the configuration in the YAML file.
Regarding the setup file (YAML):
Kubernetes uses the etcd database as its data store. The flow is like this: the kubectl command connects to the API server and sends it the YAML file. The API server parses it and stores the information in etcd, so you won't get the file back exactly as you wrote it in YAML.
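You can, however, ask the API server to render the stored object back as YAML; a quick example (the deployment name wordpress is an assumption):

kubectl get deployment wordpress -o yaml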
I have a Kubernetes cluster in DigitalOcean, and I want to pull images from a private registry in GCP.
I tried to create a secret that allows me to pull the images, following this article: https://blog.container-solutions.com/using-google-container-registry-with-kubernetes
Basically, these are the steps:
In the GCP account, create a service account key with a JSON credential.
Execute:
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/json-key-file.json)" \
--docker-email=any@valid.email
In the deployment YAML, reference the secret:
imagePullSecrets:
- name: gcr-json-key
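For reference, imagePullSecrets belongs under the pod spec of the Deployment's template. A minimal sketch; the deployment name and labels are assumptions, and the image is the one from the error below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backendnodeapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backendnodeapi
  template:
    metadata:
      labels:
        app: backendnodeapi
    spec:
      containers:
      - name: backendnodeapi
        image: gcr.io/myapp/backendnodeapi:latest
      imagePullSecrets:
      - name: gcr-json-key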
I don't understand why I am getting a 403, whether there are restrictions on using the registry from outside Google Cloud, or whether I missed some configuration.
Failed to pull image "gcr.io/myapp/backendnodeapi:latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/myapp/backendnodeapi:latest": failed to resolve reference "gcr.io/myapp/backendnodeapi:latest": unexpected status code [manifests latest]: 403 Forbidden
Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has permission to access Container Registry.
Docker requires privileged access to interact with registries. On Linux or Windows, add the user that runs Docker commands to the Docker security group.
This documentation has details on the prerequisites for Container Registry.
Note:
Ensure that your version of kubectl is the latest.
I tried replicating this by following the document you provided, and it worked on my end, so make sure that all the prerequisites are met.
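If it is the service account's permissions, pulling from GCR only needs read access to the registry's backing storage. A sketch of granting that (PROJECT_ID and SA_NAME are placeholders to substitute):

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"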
That JSON string is not a password.
The documentation suggests either activating the service account:
gcloud auth activate-service-account [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com --key-file=~/service-account.json
or adding the configuration to $HOME/.docker/config.json
and then running docker-credential-gcr configure-docker.
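After that command, $HOME/.docker/config.json should contain entries along these lines (the exact registry list may vary):

{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr",
    "eu.gcr.io": "gcr",
    "asia.gcr.io": "gcr"
  }
}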
Kubernetes seems to demand a service-account token secret, and this requires the annotation kubernetes.io/service-account.name.
Also see Configure Service Accounts for Pods.
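Such a token secret looks like this (the names follow the example on that page; substitute your own service account):

apiVersion: v1
kind: Secret
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token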
This is my very first post here and I am looking for some advice, please.
I am learning Kubernetes and trying to get the Cloud Code extension to deploy Kubernetes manifests to a non-GKE cluster. The Guestbook app can be deployed using the Cloud Code extension to a local K8s cluster (such as Minikube or Docker for Desktop).
I have two other K8s clusters, listed below, and I cannot deploy manifests to them via Cloud Code. I am not entirely sure whether this is supposed to work, as I couldn't find any docs or posts on it. Once the GCP free trial is finished, I would want to deploy my test apps to our local on-prem K8s clusters via Cloud Code.
A 3-node cluster running on CentOS VMs (built using kubeadm)
A 6-node cluster on GCP running on Ubuntu machines (free trial, built following Kubernetes the Hard Way)
Skaffold is installed locally on my Mac, and my local $HOME/.kube/config has contexts and users set up to access all three clusters.
➜  guestbook-1 kubectl config get-contexts
CURRENT   NAME                          CLUSTER                   AUTHINFO           NAMESPACE
          docker-desktop                docker-desktop            docker-desktop
*         kubernetes-admin@kubernetes   kubernetes                kubernetes-admin
          kubernetes-the-hard-way       kubernetes-the-hard-way   admin
Error:
Running: skaffold dev -v info --port-forward --rpc-http-port 57337 --filename /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1/skaffold.yaml -p cloudbuild --default-repo gcr.io/gcptrial-project
starting gRPC server on port 50051
starting gRPC HTTP server on port 57337
Skaffold &{Version:v1.19.0 ConfigVersion:skaffold/v2beta11 GitVersion: GitCommit:63949e28f40deed44c8f3c793b332191f2ef94e4 GitTreeState:dirty BuildDate:2021-01-28T17:29:26Z GoVersion:go1.14.2 Compiler:gc Platform:darwin/amd64}
applying profile: cloudbuild
no values found in profile for field TagPolicy, using original config values
Using kubectl context: kubernetes-admin#kubernetes
Loaded Skaffold defaults from \"/Users/testuser/.skaffold/config\"
Listing files to watch...
- python-guestbook-backend
watching files for artifact "python-guestbook-backend": listing files: unable to evaluate build args: reading dockerfile: open /Users/adminuser/Desktop/Cloud-Code-Builds/src/backend/Dockerfile: no such file or directory
Exited with code 1.
skaffold config file skaffold.yaml not found - check your current working directory, or try running `skaffold init`
I have the Dockerfile and skaffold.yaml in the path as shown in the image, and I have authenticated the Google Cloud SDK in VS Code. Any help, please?
I was able to get this working in the end. What helped in this particular case was removing skaffold.yaml and running skaffold init, which generated a new skaffold.yaml. Cloud Code was then able to deploy pods on both remote clusters. Thanks for all your help.
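For anyone hitting the same Dockerfile-path error: the regenerated skaffold.yaml essentially just points each artifact at the directory that actually contains its Dockerfile. A minimal sketch against the config version from the log above; the context and manifest paths are assumptions based on the error message:

apiVersion: skaffold/v2beta11
kind: Config
build:
  artifacts:
  - image: python-guestbook-backend
    context: src/backend          # directory that contains the Dockerfile
deploy:
  kubectl:
    manifests:
    - kubernetes-manifests/*.yaml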
I need to create a Docker image of the Grafana app that is on my local machine, and I need to deploy the same image to Azure Kubernetes. I was able to integrate AAD with my local Grafana, so I need to create an image out of that setup.
I am able to deploy a ready-made image from Docker Hub and run it in the Kubernetes cluster, but I am unable to integrate it with AAD authentication.
Problem 1: I need a Dockerfile that will create a Docker image of the local Grafana app.
Problem 2: How do I integrate AAD authentication with an already running Grafana container in the Kubernetes cluster?
Problem 3: We override defaults.ini values with ENV variables. Can we add ENV variables from the Grafana UI?
I need a solution to any one of these problems.
I followed this article: https://github.com/PlagueHO/Workshop-AKS#prerequisite-knowledge
It helped me create a Service, a Deployment, and a ConfigMap in the Kubernetes cluster.
To integrate AAD, we just need to add the client ID, secret, token URL, auth URL, etc. to the ConfigMap (grafana.ini)
and restart the pod in the cluster,
or
download the code from https://github.com/grafana/grafana and change the AAD values in ~/conf/defaults.ini.
Useful articles:
https://grafana.com/docs/auth/generic-oauth/
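A minimal sketch of what that grafana.ini ConfigMap might contain, using Grafana's generic OAuth section from the docs above; every ID, secret, and tenant value is a placeholder, and the exact scopes may differ for your AAD app:

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-config
data:
  grafana.ini: |
    [auth.generic_oauth]
    enabled = true
    name = Azure AD
    client_id = YOUR_CLIENT_ID
    client_secret = YOUR_CLIENT_SECRET
    scopes = openid email name
    auth_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/authorize
    token_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token

Regarding Problem 3: any grafana.ini setting can also be overridden with an environment variable of the form GF_<SECTION>_<KEY> (for example, GF_AUTH_GENERIC_OAUTH_ENABLED), but these must be set on the container itself; they cannot be added from the Grafana UI.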
I am trying to use the Cloud SQL Proxy with my container to reach Cloud SQL in GCP Kubernetes. As soon as I deploy my YAML I get "CrashLoopBackOff" for the pod, and kubectl logs cloudsql-proxy gives me an error that credentials.json is missing. I do have it in the home directory of my cloudtop, from which I access the Kubernetes cluster. I am not actually using any volumes for persistent data, but should I still use the volumes: definition mentioned in step 6.4 of https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine?
What is the mistake I am making, please?
Thanks much.
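For what it's worth, the volumes: definition in step 6.4 is not about persistent data; it is how the credentials JSON reaches the proxy container, as a Secret mount. A sketch of the documented pattern; the instance string, paths, and image tag are placeholders:

kubectl create secret generic cloudsql-instance-credentials \
    --from-file=credentials.json=$HOME/credentials.json

And in the pod spec of the deployment:

containers:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.16
  command: ["/cloud_sql_proxy",
            "-instances=PROJECT:REGION:INSTANCE=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
  - name: cloudsql-instance-credentials
    mountPath: /secrets/cloudsql
    readOnly: true
volumes:
- name: cloudsql-instance-credentials
  secret:
    secretName: cloudsql-instance-credentials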