Instructions to install addons with Kubernetes 1.6 on a bare metal machine?

I have setup my kubernetes cluster from scratch following this doc: https://kubernetes.io/docs/getting-started-guides/scratch/
My Kubernetes master and worker are working correctly, but I didn't find instructions for deploying the DNS addon.

Addons can be deployed through yaml files as well as using the addon manager. I have already installed the dashboard, monitoring, and DNS manually, using the yaml files provided (with small modifications) in this repo.
Please note that the addon-manager is pretty special: you should copy all the files into a directory and then run:
./kube-addons.sh
By the way, I prefer installing addons manually instead of using the addon manager.
DNS addon manual example:
Take kubedns-controller.yaml.sed and replace $DNS_DOMAIN with cluster.local (use the domain specified in your setup). You can also set it as a variable. Please note there are multiple occurrences in this file.
Then:
mv kubedns-controller.yaml.sed kubedns-deployment.yaml
kubectl create -f kubedns-deployment.yaml
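Alternatively, a one-liner can do the substitution and the rename in a single step (a sketch, assuming cluster.local is your cluster domain):

# Replace every occurrence of $DNS_DOMAIN and write the final manifest;
# cluster.local is assumed to be the domain from your setup
sed -e 's/\$DNS_DOMAIN/cluster.local/g' kubedns-controller.yaml.sed > kubedns-deployment.yaml
kubectl create -f kubedns-deployment.yaml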

Related

How to develop using OpenVSCode and a remote container

I would like to use OpenVSCode for cloud development in a microservices-oriented environment.
I was thinking of the following architecture/setup:
Use K8s as the runtime environment.
OVSC & dev pods run as dedicated/separate pods (not sidecars).
Code sharing is done via NFS, syncthing, etc.
The documentation showcases a setup where OVSC operates/runs as the dev pod itself. When running as described above (IDE & dev pods running as separate pods), I noticed that dev-related libraries (e.g. Golang packages) are missing, since they are installed on the dev pod.
Q:
What is needed in order to support such a setup?
Is it possible to init OVSC in such a way that it will execute commands/open the terminal on a remote container by default?
Thanks!
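One hedged starting point for the second question, assuming kubectl is installed in the OVSC pod and has RBAC access to the namespace (the pod name below is hypothetical):

# Open a shell in the separate dev pod from the OVSC integrated terminal;
# "dev-pod" is a placeholder for your actual dev pod's name
kubectl exec -it dev-pod -- /bin/bash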

Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment yaml files, where should I store these files? (Currently I store them locally on the master node.)
I am missing some best practices for this process.
Your yaml manifests shouldn't be on your master node (ever); they should be stored in a version control system (such as GitHub/GitLab/Bitbucket, etc.).
To automate the deployment of your Docker image whenever a new artifact version lands in ECR, you can use a great tool named FluxCD. It is actually very simple to install (https://fluxcd.io/docs/get-started/), and you can easily configure it to automatically deploy your images to your cluster each time there is a new image in your ECR registry.
This way CodePipeline builds the code, runs the tests, builds the image, tags it, and pushes it to ECR, while FluxCD deploys it to Kubernetes. FluxCD can also natively be configured to reconcile every X minutes (based on your configuration), so even a small change to your manifests is automatically deployed!
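As a hedged illustration of the image-automation side (this assumes Flux v2 with the image automation controllers installed; the repository name, account/region placeholders, and semver range are all illustrative):

# Register the ECR repository so Flux scans it for new tags
flux create image repository my-service \
  --image=<account-id>.dkr.ecr.<region>.amazonaws.com/my-service \
  --interval=5m

# Deploy the newest tag that matches a semver range
flux create image policy my-service \
  --image-ref=my-service \
  --select-semver='>=1.0.0'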
bguess
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was specifically designed for Kubernetes and thus offers a much better way to deploy to K8s.
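A minimal sketch of registering an application with the Argo CD CLI (the repo URL, path, and app name are placeholders; automated sync is optional):

# Create an Argo CD application that tracks manifests in Git and
# keeps the in-cluster state in sync automatically
argocd app create my-service \
  --repo https://github.com/<org>/manifests.git \
  --path my-service \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated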

Override deployed Helm chart values on GKE with values from a file on the local machine?

I would like to change my deployed (GKE) Helm chart values file with the ones inside my local file, basically to do this:
helm upgrade -f new-values.yml {release name} {package name or path}
So I've made all the changes in my local file, but the deployment is inside the GKE cluster.
I've connected to my cluster via ssh, but how can I run the above command to perform the update if the file with the new values is on my local machine and the deployment is inside the GKE cluster?
Maybe somehow via the scp command?
Solution by setting up the required tools locally (you need a moment or two for that)
You just need to reconfigure your kubectl client, which can be done pretty straightforwardly. When you log in to the GCP Console -> go to Kubernetes Engine -> Clusters -> click on Actions (3 vertical dots to the right of the cluster name) -> select Connect -> copy the command, which may resemble the following one:
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
It assumes you have the Cloud SDK and kubectl already installed on your local machine. If you have not, here is a step-by-step description of how to do that:
Installing Google Cloud SDK [Debian/Ubuntu] (if you use a different OS, simply choose another tab)
Installing kubectl tool [Debian/Ubuntu] (choose your OS if it is something different)
Once you run the above command on your local machine, your kubectl context will automatically be set to your GKE cluster, even if it was previously set to something else, e.g. a local Minikube instance. You can check it by running:
kubectl config current-context
OK, almost done. Did I also mention helm? Well, you will also need it. So if you have not installed it on your local machine previously, please do it now:
Install helm [Debian/Ubuntu]
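Putting the local-tools route together, the whole flow is roughly the following (the release and chart names are placeholders):

# Point kubectl (and therefore helm) at the GKE cluster
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
# Confirm the context switched
kubectl config current-context
# Run the upgrade with the local values file
helm upgrade -f new-values.yml my-release my-chart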
Alternative solution using Cloud Shell (much quicker)
If installing and configuring everything locally seems like too much hassle, you can simply use Cloud Shell (I bet you've used it before). If you haven't, once logged in to your GCP Console, click on the Cloud Shell icon in the top bar.
Once logged into Cloud Shell, you can upload your local files there: simply click on More (3 dots again) and choose Upload a file.
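If you prefer the terminal over the upload dialog, recent gcloud releases can also copy the file for you (a sketch; I'm assuming the cloud-shell command group is available in your gcloud version):

# Copy the local values file into your Cloud Shell home directory
gcloud cloud-shell scp localhost:~/new-values.yml cloudshell:~/new-values.yml

After that, run the same helm upgrade from the Cloud Shell session.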

Error when installing Spinnaker on Kubernetes on prem cluster

I'm trying to install Spinnaker on an on-prem Kubernetes setup.
Following instructions from https://www.spinnaker.io/setup/
Install and run Halyard as Docker on the Kubernetes master.
Run everything as root
mkdir ~/.hal on the Kubemaster. Created the service account as instructed on the site.
Copied the kubeconfig file from ~/.kube/config into ~/.hal/kubeconfig, as it didn't work with the docker -v option; there was some permission issue, so I made it work this way.
Ran the docker run halyard command -- all up and running fine.
Ran bash inside halyard.
Now when I do these two things inside halyard
Point kubectl to the kubeconfig by exporting the KUBECONFIG variable
Enable the Kubernetes provider: "hal config provider kubernetes enable"
The command sometimes executes successfully, and sometimes fails with this warning after a timeout error:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even if it somehow manages to run successfully, when I run these:
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
It fails with the same error as above.
Totally weird stuff. It's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.
As noted in the comments, this kind of error can result from a lack of network connectivity inside the container.
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin and it resolved the issue. So it seems that inside an Azure VM without a public IP, the CNI plugin is needed for the VM to connect to the internet.
To configure the CNI plugin on the Azure platform, use this guide.
Hope it helps.
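A quick way to test that hypothesis before installing anything (the container name and the availability of curl inside the image are assumptions):

# From the host, check that the Halyard container can reach Google's APIs
docker exec -it halyard curl -sI https://www.googleapis.com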

How to use the binary files to create a cluster on bare metal

I have already downloaded all the Kubernetes binaries; the directory contains:
~/vagrant/kubernetes/server/kubernetes/server/bin$ ls
federated-apiserver kubelet
hyperkube kubemark
kube-apiserver kube-proxy
kube-apiserver.docker_tag kube-proxy.docker_tag
kube-apiserver.tar kube-proxy.tar
kube-controller-manager kube-scheduler
kube-controller-manager.docker_tag kube-scheduler.docker_tag
kube-controller-manager.tar kube-scheduler.tar
kubectl
Can I use these binaries directly to create a cluster?
Yes, but unfortunately it is a non-trivial task to start with plain binaries and end up with a fully functional cluster.
To create a cluster, I'd recommend following one of the many supported solutions. If you want to create a cluster without using one of the existing scripts, you can follow the Creating a Custom Cluster from Scratch guide.
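To give a flavor of what "from scratch" means, here is a heavily simplified sketch of starting the control-plane binaries by hand (the flags are from the Kubernetes 1.x era; a running etcd plus proper certificates and kubeconfigs are assumed, and real setups need much more than this):

# Assumes etcd is already running on 127.0.0.1:2379
./kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.0.0.0/16 \
  --insecure-bind-address=127.0.0.1
# The controller manager and scheduler talk to the apiserver's insecure port
./kube-controller-manager --master=http://127.0.0.1:8080
./kube-scheduler --master=http://127.0.0.1:8080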
I downloaded the tar.gz files (flannel, etcd, kubernetes) and modified download-release.sh, changing the curl calls to untar the local files directly. Then I ran kube-up.sh and created a cluster from the downloaded files.