Override deployed Helm-chart values on GKE with values from a file on the local machine? - kubernetes

I would like to change my deployed (GKE) Helm chart's values with the ones that are inside my local file, basically to do this:
helm upgrade -f new-values.yml {release name} {package name or path}
So I've made all the changes inside my local file, but the deployment is inside the GKE cluster.
I've connected to my cluster via SSH, but how can I run the above command to perform the update if the file with the new values is on my local machine and the deployment is inside the GKE cluster?
Maybe somehow via the scp command?

Solution by setting up the required tools locally (you'll need a while or two for that)
You just need to reconfigure your kubectl client, which can be done pretty straightforwardly. When you log in to the GCP Console -> go to Kubernetes Engine -> Clusters -> click on Actions (3 vertical dots to the right of the cluster name) -> select Connect -> copy the command, which may resemble the following one:
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
It assumes you have the Cloud SDK and kubectl already installed on your local machine. If you don't, here is a step-by-step description of how to do that:
Installing Google Cloud SDK [Debian/Ubuntu] (if you use a different OS, simply choose another tab)
Installing kubectl tool [Debian/Ubuntu] (choose your OS if it is something different)
Once you run the above command on your local machine, your kubectl context will be automatically set to your GKE cluster, even if it was previously set to something else, e.g. your local Minikube instance. You can check it by running:
kubectl config current-context
OK, almost done. Did I also mention helm? Well, you will also need it. So if you haven't installed it on your local machine previously, please do it now:
Install helm [Debian/Ubuntu]
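Once all three tools are in place, the whole flow runs from your local machine. A minimal sketch, assuming Helm 3 (which talks to the cluster through the same kubeconfig as kubectl); the cluster, zone, project, release and chart names are the placeholders used above:
# point kubectl (and therefore helm) at the GKE cluster
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
kubectl config current-context   # should now print the GKE cluster's context

# run the upgrade with the values file that lives on your local machine
helm upgrade -f new-values.yml {release name} {package name or path}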
Alternative solution using Cloud Shell (much quicker)
If installing and configuring everything locally seems like too much hassle, you can simply use Cloud Shell (I bet you've used it before). In case you haven't, once logged in to your GCP Console, click on the following icon:
Once logged into Cloud Shell, you can choose to upload your local files there:
simply click on More (3 dots again):
and choose Upload a file:
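After the upload, the same upgrade can be run straight from Cloud Shell, since gcloud and kubectl come preinstalled there (and typically helm as well). A short sketch, reusing the placeholder names from above:
# point kubectl at the cluster, then upgrade with the uploaded values file
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
helm upgrade -f ~/new-values.yml {release name} {package name or path}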

Related

Trying to connect to a Digital Ocean Kubernetes cluster - .kube/config: not a directory

I'm trying to connect to a Digital Ocean Kubernetes cluster using doctl, but when I run
doctl kubernetes cluster kubeconfig save <> I get an error saying .kube/config: not a directory. I've authenticated using doctl, and when I run doctl account get I see my account info. I'm confused as to what the problem is. Is this some sort of permission issue, or did I miss a config step somewhere?
kubectl (by default) stores its configuration in ${HOME}/.kube/config. It appears you don't have that file, and the command doesn't create it if it doesn't exist; I recommend you try creating ${HOME}/.kube first, as doctl really ought to create the config file if it doesn't exist.
kubectl facilitates interacting with multiple clusters, as multiple users, in multiple namespaces through the use of a tuple called a 'context', which combines a cluster with a user and a(n optional) namespace. The command lets you switch between these easily.
After you're done with a cluster, generally (!) you must tidy up its entries in ${HOME}/.kube/config too, as these configs tend to grow over time.
You can change the location of the kubectl config file using an environment variable (KUBECONFIG).
See Organizing Cluster Access Using kubeconfig Files
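A minimal sketch of the workaround and of working with contexts (the cluster and context names below are placeholders):
# create the directory kubectl and doctl expect, then retry saving the kubeconfig
mkdir -p ${HOME}/.kube
doctl kubernetes cluster kubeconfig save <cluster-name>

# list the contexts in the config and switch between them
kubectl config get-contexts
kubectl config use-context <context-name>

# or point kubectl at a config file stored somewhere else entirely
export KUBECONFIG=/path/to/other/config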

Error when installing Spinnaker on an on-prem Kubernetes cluster

I'm trying to install Spinnaker on an on-prem Kubernetes setup.
Following instructions from https://www.spinnaker.io/setup/
Install and run Halyard as Docker on the Kubernetes master.
Run everything as root
mkdir ~/.hal on Kubemaster. Created the service account as instructed on the site.
Copied the kubeconfig file from ~/.kube/config into ~/.hal/kubeconfig, as it didn't work with the docker -v option (there was some permission issue), so I made it work this way.
Ran the docker run halyard command -- all up and running fine.
Ran bash inside the halyard container.
Now when I do these two things inside halyard:
Point kubectl to the kubeconfig with the export KUBECONFIG command
Enable the Kubernetes provider: "hal config provider kubernetes enable"
The command sometimes executes successfully, and sometimes it fails with this warning after a timeout error:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even if it somehow manages to run successfully, when I run these:
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
It fails with the same error as above.
Totally weird stuff. It's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.
As noted in the comments, these kinds of errors can occur when there is a lack of network connectivity from inside the container.
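To check whether you are hitting the same connectivity problem, you can test outbound access from inside the Halyard container first. A rough sketch; the container name is whatever you passed to docker run, and curl may need to be swapped for whatever HTTP client the image actually ships with:
# open a shell in the running halyard container
docker exec -it halyard bash

# inside the container: can we reach the host mentioned in the error?
curl -sI https://www.googleapis.com | head -n 1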
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin and it resolved the issue. So it seems that inside an Azure VM without a public IP, the CNI plugin is needed for the VM to connect to the internet.
To configure a CNI plugin on the Azure platform, use this guide.
Hope it helps.

Instructions to install addons with Kubernetes 1.6 on a bare metal machine?

I have set up my Kubernetes cluster from scratch following this doc: https://kubernetes.io/docs/getting-started-guides/scratch/
My Kubernetes master and worker are working correctly, but I didn't find instructions for deploying the DNS addon.
Addons can be deployed through YAML files as well as by using the addon manager. I have already installed the dashboard, monitoring, and DNS manually using the YAML files provided (with small modifications) in this repo.
Please note the addon-manager is pretty special: you should copy all the files into a directory and then run:
./kube-addons.sh
By the way, I prefer installing addons manually instead of using the addon manager.
DNS addon manual example:
Take kubedns-controller.yaml.sed.
Replace $DNS_DOMAIN with cluster.local (you should use the domain specified in your setup here). You can also set it as a variable. Please note there are multiple occurrences in this file (a sed sketch for the substitution follows these steps).
Then:
mv kubedns-controller.yaml.sed kubedns-deployment.yaml
kubectl create -f kubedns-deployment.yaml
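If you prefer to script the substitution instead of editing the file by hand, something like this should do it (a sketch, assuming cluster.local is your cluster domain):
# replace every occurrence of $DNS_DOMAIN and write the final manifest in one go
sed -e 's/\$DNS_DOMAIN/cluster.local/g' kubedns-controller.yaml.sed > kubedns-deployment.yaml
kubectl create -f kubedns-deployment.yaml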

What does kube-down need?

I am running Kubernetes on AWS with my friends. The procedure is below.
My friend ran kube-up.sh on his laptop.
We shared the kubeconfig file and environment variables, so I can also connect to the cluster.
But the problem is that I cannot run kube-down.sh on my own laptop.
So I would like to know: what else do I need in order to run kube-down.sh?
Thanks
You need a kubernetes release. You should download the same version that was used to create the cluster, unpack it, change into the kubernetes directory, and then run kube-down.sh.
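A rough sketch of that flow (the release version and URL below are placeholders; download the exact version your friend used, and export the same provider-related environment variables, e.g. KUBERNETES_PROVIDER=aws, that were set for kube-up.sh):
# download and unpack the matching release
wget https://github.com/kubernetes/kubernetes/releases/download/v1.5.3/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
cd kubernetes

# tear the cluster down with the same provider settings used to bring it up
export KUBERNETES_PROVIDER=aws
./cluster/kube-down.sh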

Google Cloud - Deploy App to Specific VM Instance

I am using Google Cloud / Google Compute to host my application. I was on Google App Engine and I am migrating my code to Google Compute in order to use a customized VM Instance.
I am using the tutorial here, and I am deploying my app using:
$ gcloud preview app deploy
I set up a custom VM instance using the "Create Instance" option at the top of my Google Cloud Console:
However, when I use the standard deploy gcloud command, my app is deployed to Managed VMs (managed by Google), and I have no control over those servers. I need to run the app on my custom VM because it has some custom OS-level software.
Any ideas on how to deploy the app to my custom VM Instance only? Even when I delete all the Managed VMs and try to deploy, the VMs are just re-created by Google.
The gcloud app deploy command can only be used to deploy the app to the classic App Engine sandboxed environment or to Managed VMs. It cannot deploy your application to an instance running on GCE.
You will need to incorporate your own deployment method/script, depending on the programming language you're using. Of course, since GCE is just an infrastructure-as-a-service environment (versus App Engine, which is a platform-as-a-service), you will also need to take care of high availability (what happens when your instance becomes unavailable?), scalability (what happens when one instance is not enough to sustain the load of your application?), load balancing, and many more topics you'll need to address.
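As a very rough illustration of such a bring-your-own-deployment approach, a script could copy the code to the instance and restart the app over SSH (a sketch only; the instance name, zone, paths and restart command are hypothetical and depend entirely on your stack):
# copy the application code to the custom VM
gcloud compute scp --recurse ./my-app/ my-custom-vm:/opt/my-app --zone us-central1-b

# restart whatever serves the app on that VM
gcloud compute ssh my-custom-vm --zone us-central1-b --command 'sudo systemctl restart my-app'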
Finally, if you need to install packages on your application servers, you may consider taking the Managed VMs route. It manages all the infrastructure-related matters for you (scalability, elasticity, monitoring, etc.) and still allows you to have your own custom runtime. It's still in beta, though...
How to create a simple static website and deploy it on a Google Cloud VM instance
Recommended: Docker and Google Cloud SDK should be installed
Step:1
Create a Folder “personal-website” with index.html and frontend files on your local computer
Step:2
Inside the “personal-website” folder, create a Dockerfile
Write these two lines:
FROM httpd
COPY . /usr/local/apache2/htdocs/personal-website
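(Optional) Before pushing anything, you can test the image locally if Docker is installed; a quick sketch, with an arbitrary local tag:
docker build -t personal-website .
docker run --rm -p 8080:80 personal-website
# then open http://localhost:8080/personal-website/ in a browser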
Step:3
Build the image with Docker and push it to Google Container Registry
You should have the Google Cloud SDK installed, the project selected, and Docker authorized
Select the project using these commands:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b
After that, run these commands:
1. export PROJECT_ID="$(gcloud config get-value project -q)"
2. docker build -t gcr.io/${PROJECT_ID}/personal-website:v1 .
3. gcloud auth configure-docker
4. docker push gcr.io/${PROJECT_ID}/personal-website:v1
Step:4
Create a VM instance with the container running in it
Run this command:
1. gcloud compute instances create-with-container apache-vm2 --container-image gcr.io/${PROJECT_ID}/personal-website:v1
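To actually reach the site, the instance still needs to allow HTTP traffic. A possible follow-up sketch (assumes the default network; the tag and firewall-rule names are arbitrary):
# tag the instance and open port 80 for that tag
gcloud compute instances add-tags apache-vm2 --tags http-server
gcloud compute firewall-rules create allow-http --allow tcp:80 --target-tags http-server

# find the external IP and test the site
gcloud compute instances describe apache-vm2 --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
curl http://EXTERNAL_IP/personal-website/   # replace EXTERNAL_IP with the address printed above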