CloudSQL proxy credentials: invalid Json file - kubernetes

I am trying to use the Cloud SQL proxy as a sidecar in my container to reach Cloud SQL from Kubernetes on GCP. As soon as I deploy my YAML, the pod goes into CrashLoopBackOff, and kubectl logs cloudsql-proxy reports that credentials.json is missing. The file does exist in the home directory of the cloudtop workstation from which I access the Kubernetes cluster. I am not using any volumes for persistent data, but should I still use the volumes: definition mentioned in step 6.4 of https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine?
What mistake am I making, please?
Thanks much
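For reference, step 6.4 of that guide mounts the service-account key into the pod from a Kubernetes Secret; the proxy cannot read files from your workstation's home directory. A minimal sketch of the relevant parts of the Deployment, assuming the Secret was created with kubectl create secret generic from your credentials.json (all names and the instance connection string below are placeholders, not taken from the question):

```yaml
# Sidecar container in the Deployment's pod spec; names are illustrative.
containers:
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    command: ["/cloud_sql_proxy",
              "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:3306",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
      - name: cloudsql-instance-credentials
        mountPath: /secrets/cloudsql
        readOnly: true
volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials  # created via kubectl create secret generic
```

Without the volumes: and volumeMounts: entries, the file simply isn't present inside the container, which matches the error in the proxy logs.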

Related

Where are the setup files or installed files in Kubernetes? Where are they stored on Linux or on Google Cloud?

I have used Kubernetes to deploy, for example, WordPress or nginx, installed from a YAML file. Where does it get installed, and how can I find the directory of pages (for example the WordPress pages)? The same question applies on Google Cloud: when I use Kubernetes there, what is the path of the installed files (e.g. index.php)?
If you are running the Docker image directly, without attaching anything like NFS, S3, or a disk, the files (index.php and the rest) will be in the container's own filesystem by default.
On any Kubernetes cluster, Google Cloud or otherwise, you can inspect the files inside a container:
kubectl get pods
kubectl exec -it <Wordpress pod name> -- /bin/bash
If you attach a file system like NFS, or object storage like S3 or EFS, the files will live there once you mount it and apply the configuration via the YAML file.
Regarding the setup file (YAML):
Kubernetes uses etcd as its data store. The flow is: kubectl connects to the API server and sends it the YAML file; the API server parses it and stores the information in etcd. So you won't find the file on disk in its original YAML form, although you can read the stored object back with kubectl get <resource> -o yaml.

Creating a Kubernetes Service with Pulumi up results in error Could not create watcher for Endpoint objects associated with Service

I'm trying to use Pulumi to create a Deployment with a linked Service in a Kubesail cluster. The Deployment is created fine but when Pulumi tries to create the Service an error is returned:
kubernetes:core:Service (service):
error: Plan apply failed: resource service was not successfully created by the Kubernetes API server : Could not create watcher for Endpoint objects associated with Service "service": unknown
The Service is correctly created in Kubesail, and the cause seems glaringly obvious: Pulumi can't do its neat monitoring. But the unknown error isn't so neat!
What might be being denied on the Kubernetes cluster such that Pulumi can't do the monitoring that would be different between a Deployment and a Service? Is there a way to skip the watching that I missed in the docs to get me past this?
I dug a little into the Pulumi source code, found the resource kinds it uses for tracking, and checked them with kubectl auth can-i. Lo and behold, watching Endpoints is currently denied, while watching ReplicaSets and the Services themselves is not.
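Given that finding, granting the user watch (plus get/list) on Endpoints in the target namespace should let Pulumi's await logic succeed. A hedged sketch of the RBAC objects, assuming you have permission to create them on the cluster (the names, namespace, and subject below are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpoint-watcher        # placeholder name
  namespace: default            # namespace the Service lives in
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: endpoint-watcher
  namespace: default
subjects:
  - kind: User
    name: <your-kubesail-user>  # placeholder: the identity Pulumi authenticates as
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: endpoint-watcher
  apiGroup: rbac.authorization.k8s.io
```

You can re-run kubectl auth can-i watch endpoints afterwards to confirm the grant took effect.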

Spin-front50 pod is crashing while deploying Spinnaker on Kubernetes with Minio as storage

I am trying to deploy Spinnaker in Kubernetes, with Minio (also running in Kubernetes) as storage. The spin-front50 pod does not start and keeps crashing. The pod logs show it failing with:
Caused by: java.net.UnknownHostException: spin-37f4958d-f5e4-4515-9894-25da8fcc7f66.minio-vocal-waterbuffalo.default
It seems that the code is adding the bucket name to the minio hostname and that is not being resolved in Kubernetes.
How can I make this work?
S3 storage can be accessed using the bucket name either as part of the domain or as a path. This can be controlled in Halyard by configuring it to access S3 in path style:
hal config storage s3 edit --path-style-access=true
Run this before deploying Spinnaker with Halyard. Halyard will then use minio-vocal-waterbuffalo.default as the hostname.
This is also covered in Spinnaker issue 4431.
For full disclosure, I work for OpsMx that provides commercial support for Spinnaker.

Deploying Spinnaker to Openshift fails at spin-redis-bootstrap stage

I'm trying to deploy Spinnaker into an OpenShift cluster (v3.10) using Halyard. Everything seems to deploy fine up until the deployment of spin-redis-bootstrap. The hal deploy apply command eventually times out, with the following error in the spin-redis-bootstrap pod logs:
Redis master data doesn't exist, data won't be persistent!
mkdir: cannot create directory '/redis-master-data': Permission denied
[7] 01 Oct 17:21:04.443 # Can't chdir to '/redis-master-data': No such file or directory
This looks like a permissions issue. The error does not occur when deploying directly to Kubernetes (v1.10).
Does halyard use a specific service account to deploy the Spinnaker services, that I would need to grant additional permissions to?
Any help would be appreciated.
I was able to get Redis for Spinnaker running by changing the Docker image to registry.access.redhat.com/rhscl/redis-32-rhel7 in the deployment config.
It was failing because of OpenShift's stricter permissions: by default, OpenShift runs containers as a random non-root UID, so the stock Redis image cannot create /redis-master-data.
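A minimal sketch of the change (the container name is a guess based on the question; only the image line matters):

```yaml
# In the spin-redis-bootstrap deployment config, swap in the RHSCL Redis
# image, which is built to run under OpenShift's arbitrary non-root UID.
containers:
  - name: redis            # placeholder: match the existing container name
    image: registry.access.redhat.com/rhscl/redis-32-rhel7
```

Alternatively, OpenShift can be told to let the pod run as any UID via its security context constraints, but using an image built for arbitrary UIDs is generally the cleaner fix.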

Terraform Kubernetes provider with EKS fails on configmap

I've followed the instructions to create an EKS cluster in AWS using Terraform.
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html
I've also copied the output for connecting to the cluster to ~/.kube/config-eks, and verified it works: I've been able to connect to the cluster and deploy containers manually. However, when I try to use the Terraform Kubernetes provider to connect to the cluster, I cannot seem to configure the provider properly.
I've configured the provider to use my kubectl configuration, but when attempting to push a simple ConfigMap, I get the following error:
configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"
I know the provider is picking up part of the configuration, but I cannot get it to authenticate. I suspect this is because EKS uses the Heptio authenticator, and I'm not sure whether the Kubernetes Go client used by Terraform supports it. However, given that Terraform released its AWS EKS support when EKS went GA, I doubt they wouldn't also update their Kubernetes provider to work with it.
Is it possible to even do this now? Are there alternatives?
Exec-based auth was added to client-go here: https://github.com/kubernetes/client-go/commit/19c591bac28a94ca793a2f18a0cf0f2e800fad04
This is what custom authentication plugins use, and it was published on Feb 7th.
Right now the Terraform Kubernetes provider doesn't support the new exec-based authentication, but there is an open issue with a workaround: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
That said, if I get some free time I will work on a PR.
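One common way around the missing exec support is to skip kubeconfig-based auth entirely: fetch a short-lived token with the AWS provider's aws_eks_cluster_auth data source and hand it to the Kubernetes provider directly. A sketch under those assumptions (var.cluster_name is a placeholder for your EKS cluster name; modern HCL syntax shown):

```hcl
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false  # don't fall back to ~/.kube/config
}
```

With the token supplied inline, the provider no longer needs the Heptio/exec plugin at all, which avoids the system:anonymous error.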