Accessing a shared directory inside GKE from the outside world - kubernetes

I am a newbie to Google Cloud and GKE, and I am trying to set up NFS Persistent Volumes with Kubernetes on GKE with the help of the following link:
https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266
I followed the instructions and achieved the results described in the blog, but I also need to access the shared folder (/uploads) from outside the cluster. Can someone help me achieve this, or offer any pointers or suggestions?

I have followed the doc and implemented the steps on my test GKE cluster like you. One observation about the current API version for the Deployment: we need to use apiVersion: apps/v1 instead of apiVersion: extensions/v1beta1. I then tested mounting the volume from a busybox pod, and the test was successful.
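A minimal sketch of that change, assuming the nfs-server Deployment from the blog post (the image, export path and PVC name follow the usual NFS-server example and should be adjusted to match your manifest); note that apps/v1 also requires an explicit spec.selector:

apiVersion: apps/v1                  # was: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:                          # required under apps/v1
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8   # NFS server image commonly used in these examples
        securityContext:
          privileged: true
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        volumeMounts:
        - name: nfs-export
          mountPath: /exports
      volumes:
      - name: nfs-export
        persistentVolumeClaim:
          claimName: nfs-pvc         # hypothetical PVC backing the exported directory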
Then I exposed the "nfs-server" service as service type LoadBalancer
and found the external load balancer endpoint, e.g. (LB_Public_IP):111, under the Services & Ingress tab. I allowed ports 111, 2049 and 20048 in the firewall. After that I created a Red Hat based VM in the GCP project and installed the NFS client with sudo dnf install nfs-utils -y. Then you can use the command below to see the NFS exports list, and mount the share as expected.
sudo showmount -e LB_Public_IP

Please have a look at the sample configuration sketched below, and you may also follow the GCP documentation.
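A minimal sketch of such a LoadBalancer Service, assuming the nfs-server pods carry the label role: nfs-server as in the Deployment sketched above (adjust the selector to match your manifest):

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  type: LoadBalancer          # allocates an external IP for the NFS server
  selector:
    role: nfs-server          # must match the labels on the nfs-server pods
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111

From the external VM, assuming the server exports /exports as in the blog post, the share can then be mounted with something like:

sudo mkdir -p /mnt/uploads
sudo mount -t nfs LB_Public_IP:/exports /mnt/uploads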

Related

How do I use crossplane to install helm charts (with provider-helm) into other clusters

I'm evaluating Crossplane as our go-to tool for deploying our clients' different solutions, and I have struggled with one issue:
We want to install Crossplane into one cluster on GCP (which we create manually) and use that Crossplane installation to provision new clusters on which we can install Helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell Crossplane to install the Helm charts into clusters other than its own.
This is what we have tried so far:
The ProviderConfig from the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works but installs everything into the same cluster as crossplane.
and the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of Makefile scripting to more easily generate a kubeconfig for the new cluster, and even with that kubeconfig it still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way). We get errors like: "PodUnschedulable Cannot schedule pods: gvisor".
I have only tried Crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of Crossplane and its approach compared to Terraform and the like.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second approach with the kubeconfig feels quite complicated right now (many steps that have to be done in the correct order).
Thanks
As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig for the remote cluster, provided as a Kubernetes Secret and referenced from the ProviderConfig. So, as long as you've provided a proper kubeconfig for an external cluster that is accessible from your Crossplane cluster (a.k.a. the control plane), provider-helm should be able to deploy the release to the remote cluster.
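For reference, a minimal sketch of creating that Secret so it matches the ProviderConfig above, assuming the remote cluster's kubeconfig has been saved locally as remote-cluster.kubeconfig (a hypothetical filename):

kubectl -n crossplane-system create secret generic cluster-credentials \
  --from-file=kubeconfig=./remote-cluster.kubeconfig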
So it looks like you're on the right track with configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid kubeconfig and provider-helm could access and authenticate to that cluster.
The last error you're getting sounds like an incompatibility between your cluster and the release, e.g. the external cluster only allows pods that run with gVisor and the application you want to install with provider-helm is not configured accordingly.
As a troubleshooting step, you might try installing that Helm chart with exactly the same configuration into the external cluster via the Helm CLI, using the same kubeconfig you built.
Regarding the inconvenience of building the kubeconfig: provider-helm needs a way to access that external Kubernetes cluster, and a kubeconfig is the most common way to provide it. However, if you see an alternative that would make things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for it.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. GKE clusters with provider-gcp, then you can compose a Helm ProviderConfig together with a GKE Cluster resource, which would create the appropriate Secret and ProviderConfig whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147

Spin-front50 pod is crashing while deploying Spinnaker on Kubernetes with Minio as storage

I am trying to deploy Spinnaker on Kubernetes with Minio as the storage backend, which is also running in Kubernetes. The spin-front50 pod does not start and keeps crashing. Looking at the pod logs, it is failing with:
Caused by: java.net.UnknownHostException: spin-37f4958d-f5e4-4515-9894-25da8fcc7f66.minio-vocal-waterbuffalo.default
It seems that the code is prepending the bucket name to the Minio hostname, and that name is not resolvable in Kubernetes.
How can I make this work?
S3 storage can be accessed using the bucket name either as part of the domain or as part of the path. This can be controlled in Halyard by setting it up to access S3 using path-style addressing:
hal config storage s3 edit --path-style-access=true
Run this before deploying Spinnaker with Halyard. Halyard will then use minio-vocal-waterbuffalo.default as the hostname, without the bucket prefix.
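For context, this flag usually sits alongside the rest of the Minio storage configuration in Halyard. A sketch, with the endpoint URL and access key as placeholders for your own Minio installation, run together with the --path-style-access setting above:

echo $MINIO_SECRET_KEY | hal config storage s3 edit \
    --endpoint http://minio-vocal-waterbuffalo.default:9000 \
    --access-key-id $MINIO_ACCESS_KEY \
    --secret-access-key              # read from stdin so it stays out of the command line
hal config storage edit --type s3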
This is also covered in Spinnaker issue 4431
For full disclosure, I work for OpsMx that provides commercial support for Spinnaker.

Nginx Ingress Controller Installation Error, "dial tcp 10.96.0.1:443: i/o timeout"

I'm trying to set up a Kubernetes cluster with kubeadm and Vagrant. I faced an error while installing the NGINX ingress controller: a timeout when the pod tries to retrieve a ConfigMap through the Kubernetes API. I have looked around and tried the suggested solutions, still with no luck, which is the reason for this post.
Environment:
I'm using vagrant to setup 2 nodes with ubuntu/xenial image.
kmaster
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.71
kworker1
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.72
I followed the kubeadm guide to set up the cluster
[Setup kubernetes with kubeadm]
and my cluster init command is as below:
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.2.71
and applied the Calico network plugin manifests:
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
(Calico is the plugin I currently managed to install successfully; I will write another post about the Flannel plugin, with which pods are unable to access the service.)
I'm using Helm to install the ingress controller, following the tutorial:
https://kubernetes.github.io/ingress-nginx/deploy/
This is the error that occurred once I ran the Helm deploy command and described the pod:
I'd appreciate it if someone can help. As far as I know, the reason is that the pod is unable to access the Kubernetes API, but shouldn't that already be enabled by Kubernetes by default?
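A quick way to check this from inside the cluster is a one-off pod that calls the API service address directly; this is only a diagnostic sketch (curlimages/curl is just a convenient image with curl installed):

kubectl run api-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k --max-time 10 https://10.96.0.1:443/version

If the pod network can reach the API server, this returns the version JSON (or at least an HTTP error, which still proves connectivity); if it hangs until the timeout, the pods cannot reach the service CIDR at all.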
My kube-system pod status is as below:
Another solution suggested on the Kubernetes official website:
1) Install kube-proxy with a sidecar. I'm still new to Kubernetes and am looking for an example of how to install kube-proxy with a sidecar; I'd appreciate it if someone could provide one.
2) Use client-go. I'm very confused when reading this; it seems to use the go command to pull a Go script, and I have no clue how that works with Kubernetes pods.
You guys are right. I tested on a DigitalOcean droplet and it works as expected; I now hit another error, "forbidden, user service account not permitted", so it looks like the pods are able to access the Kubernetes API. I also tested installing Istio, which hit the same issue before, and it now works on the DigitalOcean droplet.
Thank you guys.

How to access the Kubernetes API in Go and run kubectl commands

I want to access my Kubernetes cluster's API in Go to do what the kubectl command does and get the available namespaces in my k8s cluster, which is running on Google Cloud.
My sole purpose is to get the namespaces available in my cluster, as kubectl get namespaces would; kindly let me know if there is an alternative.
You can start with kubernetes/client-go, the Go client for Kubernetes, made for talking to a Kubernetes cluster (not through kubectl, though: directly through the Kubernetes API).
It includes a NamespaceLister, which helps list Namespaces.
See "Building stuff with the Kubernetes API — Using Go" from Vladimir Vivien
Michael Hausenblas (Developer Advocate at Red Hat) points in the comments to the documentation at using-client-go.cloudnative.sh,
a versioned collection of snippets showing how to use client-go.
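For illustration, a minimal sketch of listing namespaces with client-go, assuming the program runs outside the cluster and a kubeconfig for the GKE cluster exists at the default ~/.kube/config location:

package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Build a REST config from the local kubeconfig (the same credentials kubectl uses).
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // Equivalent of `kubectl get namespaces`.
    namespaces, err := clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, ns := range namespaces.Items {
        fmt.Println(ns.Name)
    }
}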

Can't run Kubernetes dashboard after installing Kubernetes cluster on rancher/server

Docker: 1.12.6
rancher/server: 1.5.10
rancher/agent: 1.2.2
I tried two ways to install a Kubernetes cluster on rancher/server.
Method 1: Use Kubernetes environment
Infrastructure/Hosts
Agent hosts disconnected sometimes.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Method 2: Use Default environment
Infrastructure/Hosts
Set some labels to rancher server and agent hosts.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Both of them have this issue: kubernetes-ingress-lbs shows 0 services and 0 containers, and then I can't access the Kubernetes dashboard.
Why wasn't it installed by Rancher?
And is it necessary to add those labels for the Kubernetes cluster?
Here is a correctly deployed Kubernetes cluster on a Rancher server:
Turning on Show System, you can find the kubernetes-dashboard service under the kube-system namespace.
Since the Kubernetes version in use is v1.5.4, you should pull the required Docker images in advance:
By reading rancher/catalog and rancher/kubernetes-package, you can understand and even modify the config files (like docker-compose.yml, rancher-compose.yml and so on) yourself.
When you enable "Show System" containers in the UI, you should be able to see the dashboard container running under the kube-system namespace. If this container is not running, the dashboard will not be able to load.
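If you also have kubectl access to the cluster, the same check can be done from the command line; a quick sketch using the service name mentioned above:

kubectl -n kube-system get pods                      # look for a kubernetes-dashboard pod in Running state
kubectl -n kube-system get svc kubernetes-dashboard  # the dashboard service exposed in kube-system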
You might have to enable the Kubernetes add-on service within the Rancher environment template:
Manage Environments >> edit the default Kubernetes template >> enable the add-on service and save the new template with your preferred name.
Then launch the cluster using the customized template.